Artificial Intelligence

Computer Vision

Project: Facial Keypoint Detection


In this project, combined knowledge of computer vision techniques and deep learning to build and end-to-end facial keypoint recognition system! Facial keypoints include points around the eyes, nose, and mouth on any face and are used in many applications, from facial tracking to emotion recognition.

There are three main parts to this project:

Part 1 : Investigating OpenCV, pre-processing, and face detection

Part 2 : Training a Convolutional Neural Network (CNN) to detect facial keypoints

Part 3 : Putting parts 1 and 2 together to identify facial keypoints on any image!

Steps to Complete the Project

In this project, explore a few of the many computer vision algorithms built into the OpenCV library. This expansive computer vision library is now almost 20 years old and still growing!

The project itself is broken down into three large parts, then even further into separate steps.

Part 1 : Investigating OpenCV, pre-processing, and face detection

  • Step 0: Detect Faces Using a Haar Cascade Classifier
  • Step 1: Add Eye Detection
  • Step 2: De-noise an Image for Better Face Detection
  • Step 3: Blur an Image and Perform Edge Detection
  • Step 4: Automatically Hide the Identity of an Individual

Part 2 : Training a Convolutional Neural Network (CNN) to detect facial keypoints

  • Step 5: Create a CNN to Recognize Facial Keypoints
  • Step 6: Compile and Train the Model
  • Step 7: Visualize the Loss and Answer Questions

Part 3 : Putting parts 1 and 2 together to identify facial keypoints on any image!

  • Step 8: Build a Robust Facial Keypoints Detector (Complete the CV Pipeline)

Step 0: Detect Faces Using a Haar Cascade Classifier

Have you ever wondered how Facebook automatically tags images with your friends' faces?
Or How high-end cameras automatically find and focus on a certain person's face?

Applications like these depend heavily on the machine learning task known as face detection - which is the task of automatically finding faces in images containing people.

At its root face detection is a classification problem - that is a problem of distinguishing between distinct classes of things. With face detection these distinct classes are 1) images of human faces 2) everything else.

We use OpenCV's implementation of Haar feature-based cascade classifiers to detect human faces in images.

OpenCV provides many pre-trained face detectors, stored as XML files on github.

We have downloaded one of these detectors and stored it in the detector_architectures directory.

Import Resources

In the next python cell, we load in the required libraries for this section of the project.

In [2]:
# Import required libraries for this section

%matplotlib inline

import numpy as np
import matplotlib.pyplot as plt
import math
import cv2                     # OpenCV library for computer vision
from PIL import Image
import time 

Next, we load in and display a test image for performing face detection.

Note: by default OpenCV assumes the ordering of our image's color channels are Blue, then Green, then Red.

This is slightly out of order with most image types we'll use in these experiments, whose color channels are ordered Red, then Green, then Blue.

In order to switch the Blue and Red channels of our test image around we will use OpenCV's cvtColor function, which you can read more about by checking out some of its documentation located here.

This is a general utility function that can do other transformations too like converting a color image to grayscale, and transforming a standard color image to HSV color space.

In [2]:
# Load in color image for face detection
image = cv2.imread('images/test_image_1.jpg')

# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Plot our image using subplots to specify a size and title
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Original Image')
ax1.imshow(image)
Out[2]:
<matplotlib.image.AxesImage at 0x1703dbd3080>

There are a lot of people - and faces - in this picture. 13 faces to be exact! In the next code cell, we demonstrate how to use a Haar Cascade classifier to detect all the faces in this test image.

This face detector uses information about patterns of intensity in an image to reliably detect faces under varying light conditions. So, to use this face detector, we'll first convert the image from color to grayscale.

Then, we load in the fully trained architecture of the face detector -- found in the file haarcascade_frontalface_default.xml - and use it on our image to find faces!

To learn more about the parameters of the detector see this post.

In [3]:
# Convert the RGB  image to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

# Extract the pre-trained face detector from an xml file
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

# Detect the faces in image
faces = face_cascade.detectMultiScale(gray, 4, 6)

# Print the number of faces detected in the image
print('Number of faces detected:', len(faces))

# Make a copy of the orginal image to draw face detections on
image_with_detections = np.copy(image)

# Get the bounding box for each detected face
for (x,y,w,h) in faces:
    # Add a red bounding box to the detections image
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h), (255,0,0), 3)
    

# Display the image with the detections
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Image with Face Detections')
ax1.imshow(image_with_detections)
Number of faces detected: 13
Out[3]:
<matplotlib.image.AxesImage at 0x1703dc29d30>

In the above code, faces is a numpy array of detected faces, where each row corresponds to a detected face.

Each detected face is a 1D array with four entries that specifies the bounding box of the detected face.

The first two entries in the array (extracted in the above code as x and y) specify the horizontal and vertical positions of the top left corner of the bounding box.

The last two entries in the array (extracted here as w and h) specify the width and height of the box.


Step 1: Add Eye Detections

There are other pre-trained detectors available that use a Haar Cascade Classifier - including full human body detectors, license plate detectors, and more.

A full list of the pre-trained architectures can be found here.

To test your eye detector, we'll first read in a new test image with just a single face.

In [4]:
# Load in color image for face detection
image = cv2.imread('images/james.jpg')

# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Plot the RGB image
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Original Image')
ax1.imshow(image)
Out[4]:
<matplotlib.image.AxesImage at 0x1703dc8d320>

Notice that even though the image is a black and white image, we have read it in as a color image and so it will still need to be converted to grayscale in order to perform the most accurate face detection.

So, the next steps will be to convert this image to grayscale, then load OpenCV's face detector and run it with parameters that detect this face accurately.

In [5]:
# Convert the RGB  image to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

# Extract the pre-trained face detector from an xml file
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

# Detect the faces in image
faces = face_cascade.detectMultiScale(gray, 1.25, 6)

# Print the number of faces detected in the image
print('Number of faces detected:', len(faces))

# Make a copy of the orginal image to draw face detections on
image_with_detections = np.copy(image)

# Get the bounding box for each detected face
for (x,y,w,h) in faces:
    # Add a red bounding box to the detections image
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h), (255,0,0), 3)
    

# Display the image with the detections
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Image with Face Detection')
ax1.imshow(image_with_detections)
Number of faces detected: 1
Out[5]:
<matplotlib.image.AxesImage at 0x1703dce66a0>

(IMPLEMENTATION) Add an eye detector to the current face detection setup.

A Haar-cascade eye detector can be included in the same way that the face detector was and, in this first task, it will be your job to do just this.

To set up an eye detector, use the stored parameters of the eye cascade detector, called haarcascade_eye.xml, located in the detector_architectures subdirectory. In the next code cell, create your eye detector and store its detections.

A few notes before you get started:

First, make sure to give your loaded eye detector the variable name

eye_cascade

and give the list of eye regions you detect the variable name

eyes

Second, since we've already run the face detector over this image, you should only search for eyes within the rectangular face regions detected in faces. This will minimize false detections.

Lastly, once you've run your eye detector over the facial detection region, you should display the RGB image with both the face detection boxes (in red) and your eye detections (in green) to verify that everything works as expected.

In [29]:
# Make a copy of the original image to plot rectangle detections
image_with_detections = np.copy(image)   

# Loop over the detections and draw their corresponding face detection boxes
for (x,y,w,h) in faces:
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h),(255,0,0), 3)  
    
# Do not change the code above this comment!

    
## TODO: Add eye detection, using haarcascade_eye.xml, to the current face detector algorithm
## TODO: Loop over the eye detections and draw their corresponding boxes in green on image_with_detections

# Extract the pre-trained face detector from an xml file
eye_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_eye.xml')
face_img = None
#search for eyes within the rectangular face regions detected
for (x,y,w,h) in faces:
    face_img = gray[y:y+h, x:x+w]
    # Detect the eyes in face image
    eyes = eye_cascade.detectMultiScale(face_img, 1.16, 6)
    for (x_e,y_e,w_e,h_e) in eyes:
        cv2.rectangle(image_with_detections, (x + x_e,y +y_e), (x+x_e+w_e,y + y_e + h_e),(0,255,0), 3) 

# Plot the image with both faces and eyes detected
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])


#ax1.imshow(face_img)

ax1.set_title('Image with Face and Eye Detection')
ax1.imshow(image_with_detections)
Out[29]:
<matplotlib.image.AxesImage at 0x1704163e4a8>

(Optional) Add face and eye detection to your laptop camera

It's time to kick it up a notch, and add face and eye detection to your laptop's camera!

Afterwards, you'll be able to show off your creation like in the gif shown below - made with a completed version of the code!

Notice that not all of the detections here are perfect - and your result need not be perfect either.

Spent a small amount of time tuning the parameters of your detectors to get reasonable results, but don't hold out for perfection. If we wanted perfection we'd need to spend a ton of time tuning the parameters of each detector, cleaning up the input image frames, etc. You can think of this as more of a rapid prototype.

The next cell contains code for a wrapper function called laptop_camera_face_eye_detector that, when called, will activate your laptop's camera. Placed the relevant face and eye detection code in this wrapper function to implement face/eye detection and mark those detections on each image frame that your camera captures.

Before adding anything to the function, you can run it to get an idea of how it works - a small window should pop up showing you the live feed from your camera; you can press any key to close this window.

Note: Mac users may find that activating this function kills the kernel of their notebook every once in a while. If this happens to you, just restart your notebook's kernel, activate cell(s) containing any crucial import statements, and you'll be good to go!

In [166]:
### Add face and eye detection to this laptop camera function 
# Make sure to draw out all faces/eyes found in each frame on the shown video feed

import cv2
import time 

# wrapper function for face/eye detection with your laptop camera
def laptop_camera_go():
    # Create instance of video capturer
    cv2.namedWindow("face detection activated")
    vc = cv2.VideoCapture(1) #I have 2 cameras

    # Try to get the first frame
    if vc.isOpened(): 
        rval, frame = vc.read()
    else:
        rval = False
    
    face_cascade = cv2.CascadeClassifier("detector_architectures/haarcascade_frontalface_default.xml")
    eyes_cascade = cv2.CascadeClassifier("detector_architectures/haarcascade_eye.xml")
    
    # Keep the video stream open
    while rval:
        gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.20, 6)
        for (x,y,w,h) in faces:
            cv2.rectangle(frame, (x,y), (x+w,y+h),(255,0,0), 3)  
    
        face_img = None
        #search for eyes within the rectangular face regions detected
        for (x,y,w,h) in faces:
            face_img = gray[y:y+h, x:x+w]
            # Detect the eyes in face image
            eyes = eye_cascade.detectMultiScale(face_img, 1.2, 8)
            for (x_e,y_e,w_e,h_e) in eyes:
                cv2.rectangle(frame, (x + x_e,y +y_e), (x+x_e+w_e,y + y_e + h_e),(0,255,0), 3) 

        
        # Plot the image from camera with all the face and eye detections marked
        cv2.imshow("face detection activated", frame)
        
        # Exit functionality - press any key to exit laptop video
        key = cv2.waitKey(20)
        if key > 0: # Exit by pressing any key
            # Destroy windows 
            cv2.destroyAllWindows()
            
            # Make sure window closes on OSx
            for i in range (1,5):
                cv2.waitKey(1)
            return
        
        # Read next frame
        time.sleep(0.05)             # control framerate for computation - default 20 frames per sec
        rval, frame = vc.read()    
In [167]:
# Call the laptop camera face/eye detector function above
laptop_camera_go()

Step 2: De-noise an Image for Better Face Detection

Image quality is an important aspect of any computer vision task.

Typically, when creating a set of images to train a deep learning network, significant care is taken to ensure that training images are free of visual noise or artifacts that hinder object detection.

While computer vision algorithms - like a face detector - are typically trained on 'nice' data such as this, new test data doesn't always look so nice!

When applying a trained computer vision algorithm to a new piece of test data one often cleans it up first before feeding it in.

This sort of cleaning - referred to as pre-processing - can include a number of cleaning phases like blurring, de-noising, color transformations, etc., and many of these tasks can be accomplished using OpenCV.

In this short subsection we explore OpenCV's noise-removal functionality to see how we can clean up a noisy image, which we then feed into our trained face detector.

Create a noisy image to work with

In the next cell, we create an artificial noisy version of the previous multi-face image.

This is a little exaggerated - we don't typically get images that are this noisy - but image noise, or 'grainy-ness' in a digitial image - is a fairly common phenomenon.

In [69]:
# Load in the multi-face test image again
image = cv2.imread('images/test_image_1.jpg')

# Convert the image copy to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Make an array copy of this image
image_with_noise = np.asarray(image)

# Create noise - here we add noise sampled randomly from a Gaussian distribution: a common model for noise
noise_level = 40
noise = np.random.randn(image.shape[0],image.shape[1],image.shape[2])*noise_level

# Add this noise to the array image copy
image_with_noise = image_with_noise + noise

# Convert back to uint8 format
image_with_noise = np.asarray([np.uint8(np.clip(i,0,255)) for i in image_with_noise])

# Plot our noisy image!
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Noisy Image')
ax1.imshow(image_with_noise)
Out[69]:
<matplotlib.image.AxesImage at 0x1704196ec18>

In the context of face detection, the problem with an image like this is that - due to noise - we may miss some faces or get false detections.

In the next cell we apply the same trained OpenCV detector with the same settings as before, to see what sort of detections we get.

In [70]:
# Convert the RGB  image to grayscale
gray_noise = cv2.cvtColor(image_with_noise, cv2.COLOR_RGB2GRAY)

# Extract the pre-trained face detector from an xml file
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

# Detect the faces in image
faces = face_cascade.detectMultiScale(gray_noise, 4, 6)

# Print the number of faces detected in the image
print('Number of faces detected:', len(faces))

# Make a copy of the orginal image to draw face detections on
image_with_detections = np.copy(image_with_noise)

# Get the bounding box for each detected face
for (x,y,w,h) in faces:
    # Add a red bounding box to the detections image
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h), (255,0,0), 3)
    

# Display the image with the detections
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Noisy Image with Face Detections')
ax1.imshow(image_with_detections)
Number of faces detected: 12
Out[70]:
<matplotlib.image.AxesImage at 0x1703ea0ba90>

With this added noise we now miss one of the faces!

(IMPLEMENTATION) De-noise this image for better face detection

Time to get your hands dirty: using OpenCV's built in color image de-noising functionality called fastNlMeansDenoisingColored - de-noise this image enough so that all the faces in the image are properly detected.

Once you have cleaned the image in the next cell, use the cell that follows to run our trained face detector over the cleaned image to check out its detections.

You can find its official documentation here and a useful example here.

Note: you can keep all parameters except photo_render fixed as shown in the second link above. Play around with the value of this parameter - see how it affects the resulting cleaned image.

In [76]:
## TODO: Use OpenCV's built in color image de-noising function to clean up our noisy image!

denoised_image = cv2.fastNlMeansDenoisingColored(image_with_noise,None,16,16,7,21)# your final de-noised image (should be RGB)

fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Denoised image')
ax1.imshow(denoised_image)
Out[76]:
<matplotlib.image.AxesImage at 0x170450d0160>
In [79]:
## TODO: Run the face detector on the de-noised image to improve your detections and display the result

# Convert the RGB  image to grayscale
gray_denoise = cv2.cvtColor(denoised_image, cv2.COLOR_RGB2GRAY)

# Detect the faces in image
faces = face_cascade.detectMultiScale(gray_denoise, 4, 6)

# Print the number of faces detected in the image
print('Number of faces detected:', len(faces))

# Make a copy of the orginal image to draw face detections on
image_with_detections = np.copy(denoised_image)

# Get the bounding box for each detected face
for (x,y,w,h) in faces:
    # Add a red bounding box to the detections image
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h), (255,0,0), 3)
    

# Display the image with the detections
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Denoised Image with Face Detections')
ax1.imshow(image_with_detections)
Number of faces detected: 13
Out[79]:
<matplotlib.image.AxesImage at 0x170451dd470>

Step 3: Blur an Image and Perform Edge Detection

Now that we have developed a simple pipeline for detecting faces using OpenCV - let's start playing around with a few fun things we can do with all those detected faces!

Importance of Blur in Edge Detection

Edge detection is a concept that pops up almost everywhere in computer vision applications, as edge-based features (as well as features built on top of edges) are often some of the best features for e.g., object detection and recognition problems.

Edge detection is a dimension reduction technique - by keeping only the edges of an image we get to throw away a lot of non-discriminating information.

Typically the most useful kind of edge-detection is one that preserves only the important, global structures (ignoring local structures that aren't very discriminative).

So removing local structures / retaining global structures is a crucial pre-processing step to performing edge detection in an image, and blurring can do just that.

Below is an animated gif showing the result of an edge-detected cat taken from Wikipedia, where the image is gradually blurred more and more prior to edge detection.

When the animation begins you can't quite make out what it's a picture of, but as the animation evolves and local structures are removed via blurring the cat becomes visible in the edge-detected image.

Edge detection is a convolution performed on the image itself, and you can read about Canny edge detection on this OpenCV documentation page.

Canny edge detection

In the cell below we load in a test image, then apply Canny edge detection on it.

The original image is shown on the left panel of the figure, while the edge-detected version of the image is shown on the right.

Notice how the result looks very busy - there are too many little details preserved in the image before it is sent to the edge detector.

When applied in computer vision applications, edge detection should preserve global structure; doing away with local structures that don't help describe what objects are in the image.

In [80]:
# Load in the image
image = cv2.imread('images/fawzia.jpg')

# Convert to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)  

# Perform Canny edge detection
edges = cv2.Canny(gray,100,200)

# Dilate the image to amplify edges
edges = cv2.dilate(edges, None)

# Plot the RGB and edge-detected image
fig = plt.figure(figsize = (15,15))
ax1 = fig.add_subplot(121)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Original Image')
ax1.imshow(image)

ax2 = fig.add_subplot(122)
ax2.set_xticks([])
ax2.set_yticks([])

ax2.set_title('Canny Edges')
ax2.imshow(edges, cmap='gray')
Out[80]:
<matplotlib.image.AxesImage at 0x1704524f6d8>

Without first blurring the image, and removing small, local structures, a lot of irrelevant edge content gets picked up and amplified by the detector (as shown in the right panel above).

(IMPLEMENTATION) Blur the image then perform edge detection

In the next cell, you will repeat this experiment - blurring the image first to remove these local structures, so that only the important boudnary details remain in the edge-detected image.

Blur the image by using OpenCV's filter2d functionality - which is discussed in this documentation page - and use an averaging kernel of width equal to 4.

In [87]:
### TODO: Blur the test imageusing OpenCV's filter2d functionality, 
# Use an averaging kernel, and a kernel width equal to 4
# Load in the image
image = cv2.imread('images/fawzia.jpg')

# Convert to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)  

kernel = np.ones((4,4),np.float32)/16
dst = cv2.filter2D(gray,-1,kernel)
    
## TODO: Then perform Canny edge detection and display the output

# Perform Canny edge detection
edges = cv2.Canny(dst,100,200)

# Dilate the image to amplify edges
edges = cv2.dilate(edges, None)

# Plot the RGB and edge-detected image
fig = plt.figure(figsize = (15,15))
ax1 = fig.add_subplot(121)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Original Image')
ax1.imshow(image)

ax2 = fig.add_subplot(122)
ax2.set_xticks([])
ax2.set_yticks([])

ax2.set_title('Canny Edges')
ax2.imshow(edges, cmap='gray')
Out[87]:
<matplotlib.image.AxesImage at 0x1704de9eb00>

Step 4: Automatically Hide the Identity of an Individual

If you film something like a documentary or reality TV, you must get permission from every individual shown on film before you can show their face, otherwise you need to blur it out - by blurring the face a lot (so much so that even the global structures are obscured)!

This is also true for projects like Google's StreetView maps - an enormous collection of mapping images taken from a fleet of Google vehicles.

Because it would be impossible for Google to get the permission of every single person accidentally captured in one of these images they blur out everyone's faces, the detected images must automatically blur the identity of detected people.

Here's a few examples of folks caught in the camera of a Google street view vehicle.

Read in an image to perform identity detection

Let's try this out for ourselves. Use the face detection pipeline built above and what you know about using the filter2D to blur and image, and use these in tandem to hide the identity of the person in the following image - loaded in and printed in the next cell.

In [131]:
# Load in the image
image = cv2.imread('images/gus.jpg')

# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Display the image
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Original Image')
ax1.imshow(image)
Out[131]:
<matplotlib.image.AxesImage at 0x1700401b0b8>

(IMPLEMENTATION) Use blurring to hide the identity of an individual in an image

The idea here is to 1) automatically detect the face in this image, and then 2) blur it out! Make sure to adjust the parameters of the averaging blur filter to completely obscure this person's identity.

In [144]:
## TODO: Implement face detection
image_with_blur_face = np.copy(image)#saved to used later after we detect the faces

#image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

# Detect the faces in image
faces = face_cascade.detectMultiScale(gray, 1.13, 6)

# Using averaging blur filter
kernel = np.ones((80, 80),np.float32) / 6400
    
## TODO: Blur the bounding box around each detected face using an averaging filter and display the result
for (x,y,w,h) in faces:
    if(w < 100) | (h < 100):# Too small - filtering mistakes
        continue  
    # Blur bounding box to the detections image
    image_with_blur_face[y:y+h, x:x+w] = cv2.filter2D(image_with_blur_face[y:y+h, x:x+w], -1, kernel)
  
# Display the image
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Image with blur face')
ax1.imshow(image_with_blur_face)
Out[144]:
<matplotlib.image.AxesImage at 0x17000da0dd8>

(Optional) Build identity protection into your laptop camera

In this optional task you can add identity protection to your laptop camera, using the previously completed code where you added face detection to your laptop camera - and the task above. You should be able to get reasonable results with little parameter tuning - like the one shown in the gif below.

As with the previous video task, to make this perfect would require significant effort - so don't strive for perfection here, strive for reasonable quality.

The next cell contains code a wrapper function called laptop_camera_identity_hider that - when called - will activate your laptop's camera. You need to place the relevant face detection and blurring code developed above in this function in order to blur faces entering your laptop camera's field of view.

Before adding anything to the function you can call it to get a hang of how it works - a small window will pop up showing you the live feed from your camera, you can press any key to close this window.

Note: Mac users may find that activating this function kills the kernel of their notebook every once in a while. If this happens to you, just restart your notebook's kernel, activate cell(s) containing any crucial import statements, and you'll be good to go!

In [3]:
### Insert face detection and blurring code into the wrapper below to create an identity protector on your laptop!
import cv2
import time 

def laptop_camera_go():
    # Create instance of video capturer
    cv2.namedWindow("face detection activated")
    vc = cv2.VideoCapture(1)  #I have 2 cameras

    # Try to get the first frame
    if vc.isOpened(): 
        rval, frame = vc.read()
    else:
        rval = False
    
    face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')
    # Using averaging blur filter
    kernel = np.ones((80, 80),np.float32) / 6400
        
    # Keep video stream open
    while rval:
        gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
        
        # Detect the faces in image
        faces = face_cascade.detectMultiScale(gray, 1.13, 6)

        ## TODO: Blur the bounding box around each detected face using an averaging filter and display the result
        for (x,y,w,h) in faces:
            if(w < 100) | (h < 100):# Too small - filtering mistakes
                continue  
            # Blur bounding box to the detections image
            frame[y:y+h, x:x+w] = cv2.filter2D(frame[y:y+h, x:x+w], -1, kernel)
            
        # Plot image from camera with detections marked
        cv2.imshow("face detection activated", frame)

        # Exit functionality - press any key to exit laptop video
        key = cv2.waitKey(20)
        if key > 0: # Exit by pressing any key
            # Destroy windows
            cv2.destroyAllWindows()
            
            for i in range (1,5):
                cv2.waitKey(1)
            return
        
        # Read next frame
        time.sleep(0.05)             # control framerate for computation - default 20 frames per sec
        rval, frame = vc.read()    
        
In [6]:
# Run laptop identity hider
laptop_camera_go()

Step 5: Create a CNN to Recognize Facial Keypoints

OpenCV is often used in practice with other machine learning and deep learning libraries to produce interesting results. In this stage of the project you will create your own end-to-end pipeline - employing convolutional networks in keras along with OpenCV - to apply a "selfie" filter to streaming video and images.

You will start by creating and then training a convolutional network that can detect facial keypoints in a small dataset of cropped images of human faces. We then guide you towards OpenCV to expanding your detection algorithm to more general images. What are facial keypoints? Let's take a look at some examples.

Facial keypoints (also called facial landmarks) are the small blue-green dots shown on each of the faces in the image above - there are 15 keypoints marked in each image.

They mark important areas of the face - the eyes, corners of the mouth, the nose, etc.

Facial keypoints can be used in a variety of machine learning applications from face and emotion recognition to commercial applications like the image filters popularized by Snapchat.

Below we illustrate a filter that, using the results of this section, automatically places sunglasses on people in images (using the facial keypoints to place the glasses correctly on each face).

Here, the facial keypoints have been colored lime green for visualization purposes.

Make a facial keypoint detector

But first things first: how can we make a facial keypoint detector? Well, at a high level, notice that facial keypoint detection is a regression problem.

A single face corresponds to a set of 15 facial keypoints (a set of 15 corresponding $(x, y)$ coordinates, i.e., an output point).

Because our input data are images, we can employ a convolutional neural network to recognize patterns in our images and learn how to identify these keypoint given sets of labeled data.

In order to train a regressor, we need a training set - a set of facial image / facial keypoint pairs to train on. For this we will be using this dataset from Kaggle. We've already downloaded this data and placed it in the data directory.

Make sure that you have both the training and test data files.

The training dataset contains several thousand $96 \times 96$ grayscale images of cropped human faces, along with each face's 15 corresponding facial keypoints (also called landmarks) that have been placed by hand, and recorded in $(x, y)$ coordinates.

This wonderful resource also has a substantial testing set, which we will use in tinkering with our convolutional network.

To load in this data, run the Python cell below - notice we will load in both the training and testing sets.

The load_data function is in the included utils.py file.

In [2]:
from utils import *

# Load training set
X_train, y_train = load_data()
print("X_train.shape == {}".format(X_train.shape))
print("y_train.shape == {}; y_train.min == {:.3f}; y_train.max == {:.3f}".format(
    y_train.shape, y_train.min(), y_train.max()))

# Load testing set
X_test, _ = load_data(test=True)
print("X_test.shape == {}".format(X_test.shape))
Using TensorFlow backend.
X_train.shape == (2140, 96, 96, 1)
y_train.shape == (2140, 30); y_train.min == -0.920; y_train.max == 0.996
X_test.shape == (1783, 96, 96, 1)

The load_data function in utils.py originates from this excellent blog post, which you are strongly encouraged to read. Please take the time now to review this function.

Note how the output values - that is, the coordinates of each set of facial landmarks - have been normalized to take on values in the range $[-1, 1]$, while the pixel values of each input point (a facial image) have been normalized to the range $[0,1]$.

Note: The original Kaggle dataset contains some images with several missing keypoints.

For simplicity, the load_data function removes those images with missing labels from the dataset.

As an optional extension, you are welcome to amend the load_data function to include the incomplete data points.

Visualize the Training Data

Execute the code cell below to visualize a subset of the training data.

In [3]:
import matplotlib.pyplot as plt
%matplotlib inline

fig = plt.figure(figsize=(20,20))
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
for i in range(9):
    ax = fig.add_subplot(3, 3, i + 1, xticks=[], yticks=[])
    plot_data(X_train[i], y_train[i], ax)

For each training image, there are two landmarks per eyebrow (four total), three per eye (six total), four for the mouth, and one for the tip of the nose.

Review the plot_data function in utils.py to understand how the 30-dimensional training labels in y_train are mapped to facial locations, as this function will prove useful for your pipeline.

(IMPLEMENTATION) Specify the CNN Architecture

In this section, you will specify a neural network for predicting the locations of facial keypoints. Use the code cell below to specify the architecture of your neural network.

We have imported some layers that you may find useful for this task, but if you need to use more Keras layers, feel free to import them in the cell.

Your network should accept a $96 \times 96$ grayscale image as input, and it should output a vector with 30 entries, corresponding to the predicted (horizontal and vertical) locations of 15 facial keypoints.

If you are not sure where to start, you can find some useful starting architectures in this blog, but you are not permitted to copy any of the architectures that you find online.

In [37]:
# Import deep learning resources from Keras
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Dropout
from keras.layers import Flatten, Dense, GlobalAveragePooling2D


## TODO: Specify a CNN architecture
# Your model should accept 96x96 pixel graysale images in
# It should have a fully-connected output layer with 30 values (2 for each facial keypoint)


model = Sequential()

'''
I Started with 32 filters and after it, I'll give the model more and more filters,
from my experience, most of the times the best way is to start with 16 filters or 32 filters.
I'm using the most popular and recommended activation function 'relu' and padding='same' so we won't lose important data,
though in this case, it seems that the data on the edges is less important than the center, so it may be less important.
I give it kernel_size of 3, since from my experience its best to be set to 2 or 3,
so I start with 3 on the first layer and will change it to 2 on the others where I'll have smaller width and height input.
'''
model.add(Convolution2D(filters=32, kernel_size=3, padding='valid', activation='relu', 
                        input_shape=X_train.shape[1:]))

'''
After each convolution layer I added a pooling layer, it will progressively reduce the spatial size of the representation,
and reduce the number of parameters and amount of computation so it gives us the ability to increase the filters count
without having too many parameters, and it also helps to control overfitting.
I'm using the MaxPooling2D with pool_size=2 which is most common and recommended.
'''
model.add(MaxPooling2D(pool_size=2))

'''
After each convolution layer, I'm also Using Dropout in order to reduce even more the number of parameters
and amount of computation and helps to control overfitting.
'''
model.add(Dropout(0.3))

model.add(Convolution2D(filters=64, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.3))

model.add(Convolution2D(filters=128, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))

model.add(Convolution2D(filters=256, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))

model.add(Convolution2D(filters=512, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))

'''
After all those layers I Added GlobalAveragePooling2D layer, doing it like in the ResNet-50 model (http://ethereon.github.io/netscope/#/gist/db945b393d40bfa26006)
it enforces correspondence between feature maps and categories, and also global average pooling is a structural regularizer,
which natively prevents overfitting for the overall structure (I get my knowledge from this paper: http://arxiv.org/pdf/1312.4400.pdf).
'''
model.add(GlobalAveragePooling2D())

'''
Finally I added a dense layer with output of 30 that will represent the 15 key points (15 for x + 15 for y),
and as activation I used the linear activation, I could use tanh activation cause it gives values between -1 to 1 
but it may act less linear around the 0 and since we have linear problem,
we better to used linear activation which means adding no activation at keras.
'''
model.add(Dense(30))

# Summarize the model
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_63 (Conv2D)           (None, 94, 94, 32)        320       
_________________________________________________________________
max_pooling2d_63 (MaxPooling (None, 47, 47, 32)        0         
_________________________________________________________________
dropout_64 (Dropout)         (None, 47, 47, 32)        0         
_________________________________________________________________
conv2d_64 (Conv2D)           (None, 47, 47, 64)        8256      
_________________________________________________________________
max_pooling2d_64 (MaxPooling (None, 23, 23, 64)        0         
_________________________________________________________________
dropout_65 (Dropout)         (None, 23, 23, 64)        0         
_________________________________________________________________
conv2d_65 (Conv2D)           (None, 23, 23, 128)       32896     
_________________________________________________________________
max_pooling2d_65 (MaxPooling (None, 11, 11, 128)       0         
_________________________________________________________________
dropout_66 (Dropout)         (None, 11, 11, 128)       0         
_________________________________________________________________
conv2d_66 (Conv2D)           (None, 11, 11, 256)       131328    
_________________________________________________________________
max_pooling2d_66 (MaxPooling (None, 5, 5, 256)         0         
_________________________________________________________________
dropout_67 (Dropout)         (None, 5, 5, 256)         0         
_________________________________________________________________
conv2d_67 (Conv2D)           (None, 5, 5, 512)         524800    
_________________________________________________________________
max_pooling2d_67 (MaxPooling (None, 2, 2, 512)         0         
_________________________________________________________________
dropout_68 (Dropout)         (None, 2, 2, 512)         0         
_________________________________________________________________
global_average_pooling2d_14  (None, 512)               0         
_________________________________________________________________
dense_15 (Dense)             (None, 30)                15390     
=================================================================
Total params: 712,990
Trainable params: 712,990
Non-trainable params: 0
_________________________________________________________________

Step 6: Compile and Train the Model

After specifying your architecture, you'll need to compile and train the model to detect facial keypoints'

(IMPLEMENTATION) Compile and Train the Model

Use the compile method to configure the learning process. Experiment with your choice of optimizer; you may have some ideas about which will work best (SGD vs. RMSprop, etc), but take the time to empirically verify your theories.

Use the fit method to train the model. Break off a validation set by setting validation_split=0.2. Save the returned History object in the history variable.

Your model is required to attain a validation loss (measured as mean squared error) of at least XYZ. When you have finished training, save your model as an HDF5 file with file path my_model.h5.

In [38]:
from keras.optimizers import SGD, RMSprop, Adagrad, Adadelta, Adam, Adamax, Nadam
from keras.callbacks import ModelCheckpoint 

## TODO: Compile the model
model.compile(optimizer='Adamax', loss="mean_squared_error", metrics=['accuracy'])

## TODO: Save the model as model.h5
# Saving the best model
checkpointer = ModelCheckpoint(filepath='my_model.h5', 
                               verbose=1, save_best_only=True)

## TODO: Train the model
hist = model.fit(X_train, y_train, batch_size=32, epochs=250, verbose=1, 
          validation_split=0.2, callbacks=[checkpointer])
Train on 1712 samples, validate on 428 samples
Epoch 1/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0157 - acc: 0.6256Epoch 00000: val_loss improved from inf to 0.03979, saving model to my_model.h5
1712/1712 [==============================] - 3s - loss: 0.0156 - acc: 0.6256 - val_loss: 0.0398 - val_acc: 0.6963
Epoch 2/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0057 - acc: 0.6692Epoch 00001: val_loss improved from 0.03979 to 0.03489, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0057 - acc: 0.6694 - val_loss: 0.0349 - val_acc: 0.6963
Epoch 3/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0054 - acc: 0.6869Epoch 00002: val_loss improved from 0.03489 to 0.02944, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0054 - acc: 0.6857 - val_loss: 0.0294 - val_acc: 0.6963
Epoch 4/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0052 - acc: 0.6834Epoch 00003: val_loss improved from 0.02944 to 0.02760, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0052 - acc: 0.6828 - val_loss: 0.0276 - val_acc: 0.6963
Epoch 5/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0051 - acc: 0.6875Epoch 00004: val_loss improved from 0.02760 to 0.02187, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0051 - acc: 0.6857 - val_loss: 0.0219 - val_acc: 0.6963
Epoch 6/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0051 - acc: 0.6940Epoch 00005: val_loss improved from 0.02187 to 0.02090, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0051 - acc: 0.6933 - val_loss: 0.0209 - val_acc: 0.6963
Epoch 7/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0050 - acc: 0.6922Epoch 00006: val_loss improved from 0.02090 to 0.01796, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0050 - acc: 0.6939 - val_loss: 0.0180 - val_acc: 0.6963
Epoch 8/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0050 - acc: 0.6958Epoch 00007: val_loss improved from 0.01796 to 0.01757, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0050 - acc: 0.6957 - val_loss: 0.0176 - val_acc: 0.6963
Epoch 9/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0048 - acc: 0.6975Epoch 00008: val_loss improved from 0.01757 to 0.01625, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0048 - acc: 0.6992 - val_loss: 0.0162 - val_acc: 0.6963
Epoch 10/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0048 - acc: 0.7022Epoch 00009: val_loss improved from 0.01625 to 0.01406, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0048 - acc: 0.7033 - val_loss: 0.0141 - val_acc: 0.6963
Epoch 11/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0048 - acc: 0.7005Epoch 00010: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0048 - acc: 0.6998 - val_loss: 0.0156 - val_acc: 0.6963
Epoch 12/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0048 - acc: 0.7052Epoch 00011: val_loss improved from 0.01406 to 0.01393, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0048 - acc: 0.7039 - val_loss: 0.0139 - val_acc: 0.6963
Epoch 13/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0048 - acc: 0.7017Epoch 00012: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0048 - acc: 0.7015 - val_loss: 0.0156 - val_acc: 0.6963
Epoch 14/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0048 - acc: 0.7022- ETA: 1s - loEpoch 00013: val_loss improved from 0.01393 to 0.01332, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0048 - acc: 0.7015 - val_loss: 0.0133 - val_acc: 0.6963
Epoch 15/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0047 - acc: 0.7022Epoch 00014: val_loss improved from 0.01332 to 0.01103, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0047 - acc: 0.7004 - val_loss: 0.0110 - val_acc: 0.6963
Epoch 16/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0047 - acc: 0.6969Epoch 00015: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0048 - acc: 0.6974 - val_loss: 0.0125 - val_acc: 0.6963
Epoch 17/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0046 - acc: 0.7046Epoch 00016: val_loss improved from 0.01103 to 0.01036, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0046 - acc: 0.7050 - val_loss: 0.0104 - val_acc: 0.6963
Epoch 18/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0046 - acc: 0.7081Epoch 00017: val_loss improved from 0.01036 to 0.00996, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0047 - acc: 0.7068 - val_loss: 0.0100 - val_acc: 0.6963
Epoch 19/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0047 - acc: 0.7017Epoch 00018: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0047 - acc: 0.7039 - val_loss: 0.0103 - val_acc: 0.6963
Epoch 20/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0046 - acc: 0.7040Epoch 00019: val_loss improved from 0.00996 to 0.00834, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0046 - acc: 0.7027 - val_loss: 0.0083 - val_acc: 0.6963
Epoch 21/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0046 - acc: 0.7070Epoch 00020: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0046 - acc: 0.7079 - val_loss: 0.0101 - val_acc: 0.6963
Epoch 22/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0045 - acc: 0.7046- ETA: 1s - loEpoch 00021: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0045 - acc: 0.7044 - val_loss: 0.0097 - val_acc: 0.6963
Epoch 23/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0045 - acc: 0.7046Epoch 00022: val_loss improved from 0.00834 to 0.00784, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0045 - acc: 0.7044 - val_loss: 0.0078 - val_acc: 0.6963
Epoch 24/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0045 - acc: 0.7058Epoch 00023: val_loss improved from 0.00784 to 0.00716, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0045 - acc: 0.7062 - val_loss: 0.0072 - val_acc: 0.6963
Epoch 25/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0044 - acc: 0.7070Epoch 00024: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0044 - acc: 0.7068 - val_loss: 0.0075 - val_acc: 0.6963
Epoch 26/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0042 - acc: 0.7046Epoch 00025: val_loss improved from 0.00716 to 0.00689, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0042 - acc: 0.7033 - val_loss: 0.0069 - val_acc: 0.6986
Epoch 27/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0042 - acc: 0.7017Epoch 00026: val_loss improved from 0.00689 to 0.00642, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0042 - acc: 0.7015 - val_loss: 0.0064 - val_acc: 0.6986
Epoch 28/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0042 - acc: 0.7011Epoch 00027: val_loss improved from 0.00642 to 0.00620, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0042 - acc: 0.7027 - val_loss: 0.0062 - val_acc: 0.6986
Epoch 29/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0041 - acc: 0.7075Epoch 00028: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0041 - acc: 0.7074 - val_loss: 0.0067 - val_acc: 0.6986
Epoch 30/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0040 - acc: 0.7070Epoch 00029: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0040 - acc: 0.7085 - val_loss: 0.0067 - val_acc: 0.7009
Epoch 31/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0039 - acc: 0.7058Epoch 00030: val_loss improved from 0.00620 to 0.00615, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0039 - acc: 0.7056 - val_loss: 0.0062 - val_acc: 0.7009
Epoch 32/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0038 - acc: 0.7087Epoch 00031: val_loss improved from 0.00615 to 0.00597, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0037 - acc: 0.7091 - val_loss: 0.0060 - val_acc: 0.7056
Epoch 33/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0037 - acc: 0.7040Epoch 00032: val_loss improved from 0.00597 to 0.00521, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0037 - acc: 0.7044 - val_loss: 0.0052 - val_acc: 0.7056
Epoch 34/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0036 - acc: 0.7005Epoch 00033: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0036 - acc: 0.7021 - val_loss: 0.0053 - val_acc: 0.7033
Epoch 35/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0035 - acc: 0.7040Epoch 00034: val_loss improved from 0.00521 to 0.00492, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0035 - acc: 0.7056 - val_loss: 0.0049 - val_acc: 0.7150
Epoch 36/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0034 - acc: 0.7081- ETA: 1s - loEpoch 00035: val_loss improved from 0.00492 to 0.00480, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0034 - acc: 0.7074 - val_loss: 0.0048 - val_acc: 0.7056
Epoch 37/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0033 - acc: 0.7093Epoch 00036: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0033 - acc: 0.7085 - val_loss: 0.0055 - val_acc: 0.7033
Epoch 38/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0031 - acc: 0.7005Epoch 00037: val_loss improved from 0.00480 to 0.00469, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0032 - acc: 0.7004 - val_loss: 0.0047 - val_acc: 0.7243
Epoch 39/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0031 - acc: 0.7075Epoch 00038: val_loss improved from 0.00469 to 0.00346, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0031 - acc: 0.7085 - val_loss: 0.0035 - val_acc: 0.7126
Epoch 40/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0031 - acc: 0.7075Epoch 00039: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0031 - acc: 0.7074 - val_loss: 0.0057 - val_acc: 0.7079
Epoch 41/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0030 - acc: 0.7123Epoch 00040: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0030 - acc: 0.7126 - val_loss: 0.0038 - val_acc: 0.7079
Epoch 42/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0029 - acc: 0.7123Epoch 00041: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0029 - acc: 0.7120 - val_loss: 0.0046 - val_acc: 0.7150
Epoch 43/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0028 - acc: 0.7188Epoch 00042: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0028 - acc: 0.7185 - val_loss: 0.0043 - val_acc: 0.7103
Epoch 44/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0028 - acc: 0.7070Epoch 00043: val_loss improved from 0.00346 to 0.00340, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0027 - acc: 0.7056 - val_loss: 0.0034 - val_acc: 0.7150
Epoch 45/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0026 - acc: 0.7188Epoch 00044: val_loss improved from 0.00340 to 0.00324, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0026 - acc: 0.7190 - val_loss: 0.0032 - val_acc: 0.7150
Epoch 46/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0026 - acc: 0.7158Epoch 00045: val_loss improved from 0.00324 to 0.00247, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0026 - acc: 0.7161 - val_loss: 0.0025 - val_acc: 0.7173
Epoch 47/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0025 - acc: 0.7123Epoch 00046: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0025 - acc: 0.7109 - val_loss: 0.0027 - val_acc: 0.7266
Epoch 48/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0025 - acc: 0.7146Epoch 00047: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0025 - acc: 0.7120 - val_loss: 0.0035 - val_acc: 0.7196
Epoch 49/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0024 - acc: 0.7134Epoch 00048: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0024 - acc: 0.7138 - val_loss: 0.0028 - val_acc: 0.7290
Epoch 50/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0023 - acc: 0.7193Epoch 00049: val_loss improved from 0.00247 to 0.00243, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0023 - acc: 0.7190 - val_loss: 0.0024 - val_acc: 0.7290
Epoch 51/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0022 - acc: 0.7270Epoch 00050: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0022 - acc: 0.7266 - val_loss: 0.0025 - val_acc: 0.7313
Epoch 52/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0021 - acc: 0.7105Epoch 00051: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0022 - acc: 0.7097 - val_loss: 0.0026 - val_acc: 0.7336
Epoch 53/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0021 - acc: 0.7241Epoch 00052: val_loss improved from 0.00243 to 0.00219, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0021 - acc: 0.7237 - val_loss: 0.0022 - val_acc: 0.7360
Epoch 54/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0020 - acc: 0.7294Epoch 00053: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0020 - acc: 0.7296 - val_loss: 0.0026 - val_acc: 0.7266
Epoch 55/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0020 - acc: 0.7294Epoch 00054: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0020 - acc: 0.7296 - val_loss: 0.0026 - val_acc: 0.7407
Epoch 56/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0020 - acc: 0.7453Epoch 00055: val_loss improved from 0.00219 to 0.00202, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0020 - acc: 0.7459 - val_loss: 0.0020 - val_acc: 0.7407
Epoch 57/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7471Epoch 00056: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0019 - acc: 0.7482 - val_loss: 0.0022 - val_acc: 0.7407
Epoch 58/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7323Epoch 00057: val_loss improved from 0.00202 to 0.00172, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0019 - acc: 0.7336 - val_loss: 0.0017 - val_acc: 0.7430
Epoch 59/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7465Epoch 00058: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0018 - acc: 0.7471 - val_loss: 0.0020 - val_acc: 0.7407
Epoch 60/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7529Epoch 00059: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0018 - acc: 0.7535 - val_loss: 0.0018 - val_acc: 0.7430
Epoch 61/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7305- ETA: 1s - loEpoch 00060: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0018 - acc: 0.7290 - val_loss: 0.0017 - val_acc: 0.7453
Epoch 62/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7500Epoch 00061: val_loss improved from 0.00172 to 0.00170, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0017 - acc: 0.7488 - val_loss: 0.0017 - val_acc: 0.7383
Epoch 63/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0016 - acc: 0.7565- ETA: 1s - loEpoch 00062: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0016 - acc: 0.7576 - val_loss: 0.0019 - val_acc: 0.7430
Epoch 64/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7518Epoch 00063: val_loss improved from 0.00170 to 0.00164, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0017 - acc: 0.7518 - val_loss: 0.0016 - val_acc: 0.7710
Epoch 65/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0016 - acc: 0.7500Epoch 00064: val_loss improved from 0.00164 to 0.00158, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0016 - acc: 0.7512 - val_loss: 0.0016 - val_acc: 0.7477
Epoch 66/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7653Epoch 00065: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0015 - acc: 0.7658 - val_loss: 0.0017 - val_acc: 0.7383
Epoch 67/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7659Epoch 00066: val_loss improved from 0.00158 to 0.00149, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0015 - acc: 0.7658 - val_loss: 0.0015 - val_acc: 0.7407
Epoch 68/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7624Epoch 00067: val_loss improved from 0.00149 to 0.00147, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0015 - acc: 0.7623 - val_loss: 0.0015 - val_acc: 0.7453
Epoch 69/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7742Epoch 00068: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0015 - acc: 0.7745 - val_loss: 0.0017 - val_acc: 0.7757
Epoch 70/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7630Epoch 00069: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0014 - acc: 0.7640 - val_loss: 0.0018 - val_acc: 0.7617
Epoch 71/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7594Epoch 00070: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0014 - acc: 0.7582 - val_loss: 0.0016 - val_acc: 0.7523
Epoch 72/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7695Epoch 00071: val_loss improved from 0.00147 to 0.00142, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0014 - acc: 0.7704 - val_loss: 0.0014 - val_acc: 0.7827
Epoch 73/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7777Epoch 00072: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0014 - acc: 0.7780 - val_loss: 0.0016 - val_acc: 0.7407
Epoch 74/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7730Epoch 00073: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0013 - acc: 0.7739 - val_loss: 0.0016 - val_acc: 0.7780
Epoch 75/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7889Epoch 00074: val_loss improved from 0.00142 to 0.00138, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0013 - acc: 0.7891 - val_loss: 0.0014 - val_acc: 0.7804
Epoch 76/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7824Epoch 00075: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0013 - acc: 0.7833 - val_loss: 0.0015 - val_acc: 0.7734
Epoch 77/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7871Epoch 00076: val_loss improved from 0.00138 to 0.00135, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0013 - acc: 0.7880 - val_loss: 0.0014 - val_acc: 0.7593
Epoch 78/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7883Epoch 00077: val_loss improved from 0.00135 to 0.00135, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0012 - acc: 0.7886 - val_loss: 0.0014 - val_acc: 0.7804
Epoch 79/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7954Epoch 00078: val_loss improved from 0.00135 to 0.00127, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0012 - acc: 0.7956 - val_loss: 0.0013 - val_acc: 0.7757
Epoch 80/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7895Epoch 00079: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0012 - acc: 0.7897 - val_loss: 0.0014 - val_acc: 0.7874
Epoch 81/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.7759Epoch 00080: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0011 - acc: 0.7751 - val_loss: 0.0014 - val_acc: 0.7874
Epoch 82/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.8031Epoch 00081: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0012 - acc: 0.8026 - val_loss: 0.0015 - val_acc: 0.7640
Epoch 83/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.7983Epoch 00082: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0011 - acc: 0.7979 - val_loss: 0.0013 - val_acc: 0.7710
Epoch 84/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.8013Epoch 00083: val_loss improved from 0.00127 to 0.00120, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0011 - acc: 0.8014 - val_loss: 0.0012 - val_acc: 0.7804
Epoch 85/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.7978Epoch 00084: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0011 - acc: 0.7956 - val_loss: 0.0013 - val_acc: 0.7804
Epoch 86/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.8001Epoch 00085: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0011 - acc: 0.8014 - val_loss: 0.0013 - val_acc: 0.7991
Epoch 87/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0010 - acc: 0.8096Epoch 00086: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0010 - acc: 0.8090 - val_loss: 0.0015 - val_acc: 0.7640
Epoch 88/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.7919Epoch 00087: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 0.0010 - acc: 0.7921 - val_loss: 0.0014 - val_acc: 0.7850
Epoch 89/250
1696/1712 [============================>.] - ETA: 0s - loss: 9.7636e-04 - acc: 0.8072Epoch 00088: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 9.7677e-04 - acc: 0.8049 - val_loss: 0.0015 - val_acc: 0.7921
Epoch 90/250
1696/1712 [============================>.] - ETA: 0s - loss: 0.0010 - acc: 0.8078Epoch 00089: val_loss improved from 0.00120 to 0.00117, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 0.0010 - acc: 0.8084 - val_loss: 0.0012 - val_acc: 0.7804
Epoch 91/250
1696/1712 [============================>.] - ETA: 0s - loss: 9.8118e-04 - acc: 0.7995Epoch 00090: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 9.8068e-04 - acc: 0.7991 - val_loss: 0.0013 - val_acc: 0.7967
Epoch 92/250
1696/1712 [============================>.] - ETA: 0s - loss: 9.6269e-04 - acc: 0.8078Epoch 00091: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 9.6644e-04 - acc: 0.8067 - val_loss: 0.0013 - val_acc: 0.7710
Epoch 93/250
1696/1712 [============================>.] - ETA: 0s - loss: 9.6356e-04 - acc: 0.8125Epoch 00092: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 9.6291e-04 - acc: 0.8119 - val_loss: 0.0013 - val_acc: 0.8107
Epoch 94/250
1696/1712 [============================>.] - ETA: 0s - loss: 9.2515e-04 - acc: 0.8125Epoch 00093: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 9.2550e-04 - acc: 0.8125 - val_loss: 0.0012 - val_acc: 0.7967
Epoch 95/250
1696/1712 [============================>.] - ETA: 0s - loss: 9.4007e-04 - acc: 0.8060Epoch 00094: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 9.4056e-04 - acc: 0.8061 - val_loss: 0.0012 - val_acc: 0.7827
Epoch 96/250
1696/1712 [============================>.] - ETA: 0s - loss: 9.1136e-04 - acc: 0.8154Epoch 00095: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 9.1193e-04 - acc: 0.8166 - val_loss: 0.0012 - val_acc: 0.8037
Epoch 97/250
1696/1712 [============================>.] - ETA: 0s - loss: 9.0370e-04 - acc: 0.8154Epoch 00096: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 9.0698e-04 - acc: 0.8160 - val_loss: 0.0012 - val_acc: 0.7991
Epoch 98/250
1696/1712 [============================>.] - ETA: 0s - loss: 8.7748e-04 - acc: 0.8131Epoch 00097: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 8.8062e-04 - acc: 0.8131 - val_loss: 0.0014 - val_acc: 0.8037
Epoch 99/250
1696/1712 [============================>.] - ETA: 0s - loss: 8.7297e-04 - acc: 0.8267Epoch 00098: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 8.7481e-04 - acc: 0.8265 - val_loss: 0.0012 - val_acc: 0.7897
Epoch 100/250
1696/1712 [============================>.] - ETA: 0s - loss: 8.5665e-04 - acc: 0.8196Epoch 00099: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 8.5505e-04 - acc: 0.8183 - val_loss: 0.0012 - val_acc: 0.7921
Epoch 101/250
1696/1712 [============================>.] - ETA: 0s - loss: 8.4212e-04 - acc: 0.8143Epoch 00100: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 8.4365e-04 - acc: 0.8148 - val_loss: 0.0012 - val_acc: 0.7991
Epoch 102/250
1696/1712 [============================>.] - ETA: 0s - loss: 8.9007e-04 - acc: 0.8090Epoch 00101: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 8.8985e-04 - acc: 0.8090 - val_loss: 0.0012 - val_acc: 0.8037
Epoch 103/250
1696/1712 [============================>.] - ETA: 0s - loss: 8.2839e-04 - acc: 0.8202Epoch 00102: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 8.2898e-04 - acc: 0.8218 - val_loss: 0.0012 - val_acc: 0.8084
Epoch 104/250
1696/1712 [============================>.] - ETA: 0s - loss: 8.1150e-04 - acc: 0.8231Epoch 00103: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 8.1034e-04 - acc: 0.8230 - val_loss: 0.0016 - val_acc: 0.7991
Epoch 105/250
1696/1712 [============================>.] - ETA: 0s - loss: 7.9819e-04 - acc: 0.8296Epoch 00104: val_loss improved from 0.00117 to 0.00112, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 7.9863e-04 - acc: 0.8294 - val_loss: 0.0011 - val_acc: 0.7991
Epoch 106/250
1696/1712 [============================>.] - ETA: 0s - loss: 7.8724e-04 - acc: 0.8290Epoch 00105: val_loss improved from 0.00112 to 0.00109, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 7.9044e-04 - acc: 0.8294 - val_loss: 0.0011 - val_acc: 0.8084
Epoch 107/250
1696/1712 [============================>.] - ETA: 0s - loss: 7.8913e-04 - acc: 0.8202Epoch 00106: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 7.8787e-04 - acc: 0.8207 - val_loss: 0.0013 - val_acc: 0.7991
Epoch 108/250
1696/1712 [============================>.] - ETA: 0s - loss: 7.6621e-04 - acc: 0.8190Epoch 00107: val_loss improved from 0.00109 to 0.00105, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 7.6492e-04 - acc: 0.8195 - val_loss: 0.0011 - val_acc: 0.8084
Epoch 109/250
1696/1712 [============================>.] - ETA: 0s - loss: 7.7169e-04 - acc: 0.8219Epoch 00108: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 7.7386e-04 - acc: 0.8224 - val_loss: 0.0012 - val_acc: 0.8037
Epoch 110/250
1696/1712 [============================>.] - ETA: 0s - loss: 7.4755e-04 - acc: 0.8337Epoch 00109: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 7.4568e-04 - acc: 0.8353 - val_loss: 0.0012 - val_acc: 0.7967
Epoch 111/250
1696/1712 [============================>.] - ETA: 0s - loss: 7.5370e-04 - acc: 0.8219Epoch 00110: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 7.5185e-04 - acc: 0.8218 - val_loss: 0.0012 - val_acc: 0.8084
Epoch 112/250
1696/1712 [============================>.] - ETA: 0s - loss: 7.3208e-04 - acc: 0.8367Epoch 00111: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 7.3219e-04 - acc: 0.8370 - val_loss: 0.0011 - val_acc: 0.8014
Epoch 113/250
1696/1712 [============================>.] - ETA: 0s - loss: 7.2422e-04 - acc: 0.8290Epoch 00112: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 7.2303e-04 - acc: 0.8289 - val_loss: 0.0012 - val_acc: 0.8084
Epoch 114/250
1696/1712 [============================>.] - ETA: 0s - loss: 7.3284e-04 - acc: 0.8337Epoch 00113: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 7.3196e-04 - acc: 0.8335 - val_loss: 0.0011 - val_acc: 0.8014
Epoch 115/250
1696/1712 [============================>.] - ETA: 0s - loss: 7.4091e-04 - acc: 0.8208Epoch 00114: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 7.4200e-04 - acc: 0.8195 - val_loss: 0.0012 - val_acc: 0.8341
Epoch 116/250
1696/1712 [============================>.] - ETA: 0s - loss: 7.1885e-04 - acc: 0.8261Epoch 00115: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 7.2039e-04 - acc: 0.8242 - val_loss: 0.0014 - val_acc: 0.7991
Epoch 117/250
1696/1712 [============================>.] - ETA: 0s - loss: 7.0644e-04 - acc: 0.8367Epoch 00116: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 7.0619e-04 - acc: 0.8376 - val_loss: 0.0013 - val_acc: 0.8154
Epoch 118/250
1696/1712 [============================>.] - ETA: 0s - loss: 6.9974e-04 - acc: 0.8402Epoch 00117: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 6.9905e-04 - acc: 0.8400 - val_loss: 0.0011 - val_acc: 0.8084
Epoch 119/250
1696/1712 [============================>.] - ETA: 0s - loss: 6.8687e-04 - acc: 0.8443Epoch 00118: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 6.8591e-04 - acc: 0.8458 - val_loss: 0.0012 - val_acc: 0.8178
Epoch 120/250
1696/1712 [============================>.] - ETA: 0s - loss: 6.8883e-04 - acc: 0.8438Epoch 00119: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 6.8853e-04 - acc: 0.8423 - val_loss: 0.0011 - val_acc: 0.8178
Epoch 121/250
1696/1712 [============================>.] - ETA: 0s - loss: 6.7621e-04 - acc: 0.8479Epoch 00120: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 6.7653e-04 - acc: 0.8470 - val_loss: 0.0012 - val_acc: 0.7967
Epoch 122/250
1696/1712 [============================>.] - ETA: 0s - loss: 6.8789e-04 - acc: 0.8396Epoch 00121: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 6.8805e-04 - acc: 0.8394 - val_loss: 0.0013 - val_acc: 0.8014
Epoch 123/250
1696/1712 [============================>.] - ETA: 0s - loss: 6.5338e-04 - acc: 0.8296Epoch 00122: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 6.5390e-04 - acc: 0.8300 - val_loss: 0.0011 - val_acc: 0.7991
Epoch 124/250
1696/1712 [============================>.] - ETA: 0s - loss: 6.4966e-04 - acc: 0.8426Epoch 00123: val_loss improved from 0.00105 to 0.00105, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 6.5091e-04 - acc: 0.8423 - val_loss: 0.0011 - val_acc: 0.8154
Epoch 125/250
1696/1712 [============================>.] - ETA: 0s - loss: 6.4407e-04 - acc: 0.8591Epoch 00124: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 6.4233e-04 - acc: 0.8592 - val_loss: 0.0012 - val_acc: 0.8201
Epoch 126/250
1696/1712 [============================>.] - ETA: 0s - loss: 6.2937e-04 - acc: 0.8420Epoch 00125: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 6.3148e-04 - acc: 0.8435 - val_loss: 0.0011 - val_acc: 0.8201
Epoch 127/250
1696/1712 [============================>.] - ETA: 0s - loss: 6.6334e-04 - acc: 0.8414Epoch 00126: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 6.6330e-04 - acc: 0.8417 - val_loss: 0.0012 - val_acc: 0.8154
Epoch 128/250
1696/1712 [============================>.] - ETA: 0s - loss: 6.2335e-04 - acc: 0.8491Epoch 00127: val_loss improved from 0.00105 to 0.00104, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 6.2336e-04 - acc: 0.8481 - val_loss: 0.0010 - val_acc: 0.8037
Epoch 129/250
1696/1712 [============================>.] - ETA: 0s - loss: 6.3130e-04 - acc: 0.8461Epoch 00128: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 6.3132e-04 - acc: 0.8464 - val_loss: 0.0013 - val_acc: 0.8248
Epoch 130/250
1696/1712 [============================>.] - ETA: 0s - loss: 6.0648e-04 - acc: 0.8402Epoch 00129: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 6.0590e-04 - acc: 0.8405 - val_loss: 0.0011 - val_acc: 0.8037
Epoch 131/250
1696/1712 [============================>.] - ETA: 0s - loss: 6.1192e-04 - acc: 0.8508Epoch 00130: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 6.1096e-04 - acc: 0.8516 - val_loss: 0.0012 - val_acc: 0.8271
Epoch 132/250
1696/1712 [============================>.] - ETA: 0s - loss: 6.1121e-04 - acc: 0.8514Epoch 00131: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 6.1168e-04 - acc: 0.8516 - val_loss: 0.0012 - val_acc: 0.8154
Epoch 133/250
1696/1712 [============================>.] - ETA: 0s - loss: 6.1041e-04 - acc: 0.8485Epoch 00132: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 6.1049e-04 - acc: 0.8464 - val_loss: 0.0011 - val_acc: 0.8178
Epoch 134/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.9823e-04 - acc: 0.8331Epoch 00133: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 5.9987e-04 - acc: 0.8347 - val_loss: 0.0012 - val_acc: 0.8131
Epoch 135/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.8305e-04 - acc: 0.8461Epoch 00134: val_loss improved from 0.00104 to 0.00101, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 5.8349e-04 - acc: 0.8464 - val_loss: 0.0010 - val_acc: 0.8154
Epoch 136/250
1696/1712 [============================>.] - ETA: 0s - loss: 6.0350e-04 - acc: 0.8496Epoch 00135: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 6.0212e-04 - acc: 0.8493 - val_loss: 0.0011 - val_acc: 0.8107
Epoch 137/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.8441e-04 - acc: 0.8367Epoch 00136: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 5.8385e-04 - acc: 0.8364 - val_loss: 0.0012 - val_acc: 0.8037
Epoch 138/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.6725e-04 - acc: 0.8561Epoch 00137: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 5.6910e-04 - acc: 0.8557 - val_loss: 0.0011 - val_acc: 0.8084
Epoch 139/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.5963e-04 - acc: 0.8426Epoch 00138: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 5.6017e-04 - acc: 0.8417 - val_loss: 0.0011 - val_acc: 0.8201
Epoch 140/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.6860e-04 - acc: 0.8614Epoch 00139: val_loss improved from 0.00101 to 0.00100, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 5.6842e-04 - acc: 0.8604 - val_loss: 9.9596e-04 - val_acc: 0.8084
Epoch 141/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.7707e-04 - acc: 0.8461Epoch 00140: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 5.7652e-04 - acc: 0.8464 - val_loss: 0.0011 - val_acc: 0.8084
Epoch 142/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.6506e-04 - acc: 0.8479Epoch 00141: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 5.6523e-04 - acc: 0.8475 - val_loss: 0.0011 - val_acc: 0.8224
Epoch 143/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.5393e-04 - acc: 0.8485Epoch 00142: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 5.5529e-04 - acc: 0.8493 - val_loss: 0.0013 - val_acc: 0.8201
Epoch 144/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.6804e-04 - acc: 0.8550Epoch 00143: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 5.6791e-04 - acc: 0.8557 - val_loss: 0.0012 - val_acc: 0.8248
Epoch 145/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.4601e-04 - acc: 0.8496Epoch 00144: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 5.4490e-04 - acc: 0.8511 - val_loss: 0.0011 - val_acc: 0.8224
Epoch 146/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.4020e-04 - acc: 0.8514Epoch 00145: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 5.4003e-04 - acc: 0.8516 - val_loss: 0.0012 - val_acc: 0.8271
Epoch 147/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.3797e-04 - acc: 0.8479Epoch 00146: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 5.3797e-04 - acc: 0.8470 - val_loss: 0.0011 - val_acc: 0.8107
Epoch 148/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.3872e-04 - acc: 0.8561Epoch 00147: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 5.3760e-04 - acc: 0.8569 - val_loss: 0.0010 - val_acc: 0.8061
Epoch 149/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.5181e-04 - acc: 0.8650Epoch 00148: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 5.5098e-04 - acc: 0.8662 - val_loss: 0.0010 - val_acc: 0.8224
Epoch 150/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.4196e-04 - acc: 0.8620Epoch 00149: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 5.4174e-04 - acc: 0.8633 - val_loss: 0.0011 - val_acc: 0.8131
Epoch 151/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.3019e-04 - acc: 0.8396Epoch 00150: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 5.3064e-04 - acc: 0.8411 - val_loss: 0.0013 - val_acc: 0.8318
Epoch 152/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.3489e-04 - acc: 0.8626Epoch 00151: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 5.3499e-04 - acc: 0.8627 - val_loss: 0.0011 - val_acc: 0.8178
Epoch 153/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.3890e-04 - acc: 0.8597Epoch 00152: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 5.3784e-04 - acc: 0.8598 - val_loss: 0.0011 - val_acc: 0.8107
Epoch 154/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.1566e-04 - acc: 0.8526Epoch 00153: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 5.1664e-04 - acc: 0.8528 - val_loss: 0.0011 - val_acc: 0.8061
Epoch 155/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.1104e-04 - acc: 0.8644Epoch 00154: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 5.1076e-04 - acc: 0.8639 - val_loss: 0.0011 - val_acc: 0.8084
Epoch 156/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.0305e-04 - acc: 0.8620Epoch 00155: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 5.0359e-04 - acc: 0.8604 - val_loss: 0.0011 - val_acc: 0.8131
Epoch 157/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.0012e-04 - acc: 0.8644Epoch 00156: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.9999e-04 - acc: 0.8639 - val_loss: 0.0011 - val_acc: 0.8318
Epoch 158/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.9200e-04 - acc: 0.8526Epoch 00157: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.9225e-04 - acc: 0.8522 - val_loss: 0.0011 - val_acc: 0.8201
Epoch 159/250
1696/1712 [============================>.] - ETA: 0s - loss: 5.0561e-04 - acc: 0.8597Epoch 00158: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 5.0629e-04 - acc: 0.8581 - val_loss: 0.0011 - val_acc: 0.8271
Epoch 160/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.9514e-04 - acc: 0.8603Epoch 00159: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.9615e-04 - acc: 0.8610 - val_loss: 0.0011 - val_acc: 0.8037
Epoch 161/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.9047e-04 - acc: 0.8561Epoch 00160: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.9062e-04 - acc: 0.8569 - val_loss: 0.0012 - val_acc: 0.7921
Epoch 162/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.8630e-04 - acc: 0.8555Epoch 00161: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.8497e-04 - acc: 0.8563 - val_loss: 0.0011 - val_acc: 0.8131
Epoch 163/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.8479e-04 - acc: 0.8520- ETA: 1s - loss: Epoch 00162: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.8386e-04 - acc: 0.8511 - val_loss: 0.0011 - val_acc: 0.8154
Epoch 164/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.9327e-04 - acc: 0.8597Epoch 00163: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.9228e-04 - acc: 0.8592 - val_loss: 0.0011 - val_acc: 0.8224
Epoch 165/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.7668e-04 - acc: 0.8779Epoch 00164: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.7576e-04 - acc: 0.8785 - val_loss: 0.0012 - val_acc: 0.8178
Epoch 166/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.7451e-04 - acc: 0.8650Epoch 00165: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.7486e-04 - acc: 0.8645 - val_loss: 0.0010 - val_acc: 0.8201
Epoch 167/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.9378e-04 - acc: 0.8656- ETA: 1s - loss: Epoch 00166: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.9443e-04 - acc: 0.8662 - val_loss: 0.0011 - val_acc: 0.8061
Epoch 168/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.9534e-04 - acc: 0.8667- ETA: 1s - loss: Epoch 00167: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.9654e-04 - acc: 0.8645 - val_loss: 0.0010 - val_acc: 0.8248
Epoch 169/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.6790e-04 - acc: 0.8667Epoch 00168: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.6906e-04 - acc: 0.8680 - val_loss: 0.0011 - val_acc: 0.8154
Epoch 170/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.6703e-04 - acc: 0.8614Epoch 00169: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.6723e-04 - acc: 0.8616 - val_loss: 0.0013 - val_acc: 0.8318
Epoch 171/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.6634e-04 - acc: 0.8608Epoch 00170: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.6632e-04 - acc: 0.8598 - val_loss: 0.0011 - val_acc: 0.8107
Epoch 172/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.6368e-04 - acc: 0.8567Epoch 00171: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.6419e-04 - acc: 0.8569 - val_loss: 0.0011 - val_acc: 0.8271
Epoch 173/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.5236e-04 - acc: 0.8715Epoch 00172: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.5260e-04 - acc: 0.8709 - val_loss: 0.0011 - val_acc: 0.8154
Epoch 174/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.3699e-04 - acc: 0.8768- ETA: 1s - loss: Epoch 00173: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.3729e-04 - acc: 0.8762 - val_loss: 0.0010 - val_acc: 0.8131
Epoch 175/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.6360e-04 - acc: 0.8608Epoch 00174: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.6398e-04 - acc: 0.8616 - val_loss: 0.0010 - val_acc: 0.8341
Epoch 176/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.4670e-04 - acc: 0.8626- ETA: 1s - loss: Epoch 00175: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.4676e-04 - acc: 0.8639 - val_loss: 0.0011 - val_acc: 0.8318
Epoch 177/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.4963e-04 - acc: 0.8656Epoch 00176: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.4862e-04 - acc: 0.8645 - val_loss: 0.0010 - val_acc: 0.8271
Epoch 178/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.4375e-04 - acc: 0.8608Epoch 00177: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.4422e-04 - acc: 0.8616 - val_loss: 0.0010 - val_acc: 0.8131
Epoch 179/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.6594e-04 - acc: 0.8691Epoch 00178: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.6575e-04 - acc: 0.8703 - val_loss: 0.0012 - val_acc: 0.8131
Epoch 180/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.5423e-04 - acc: 0.8561Epoch 00179: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.5532e-04 - acc: 0.8563 - val_loss: 0.0010 - val_acc: 0.8248
Epoch 181/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.5370e-04 - acc: 0.8756Epoch 00180: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.5432e-04 - acc: 0.8762 - val_loss: 0.0011 - val_acc: 0.8458
Epoch 182/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.3490e-04 - acc: 0.8650Epoch 00181: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.3459e-04 - acc: 0.8662 - val_loss: 0.0011 - val_acc: 0.8388
Epoch 183/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.3170e-04 - acc: 0.8768Epoch 00182: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.3215e-04 - acc: 0.8768 - val_loss: 0.0011 - val_acc: 0.8364
Epoch 184/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.3165e-04 - acc: 0.8709Epoch 00183: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.3197e-04 - acc: 0.8721 - val_loss: 0.0010 - val_acc: 0.8364
Epoch 185/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.4969e-04 - acc: 0.8691Epoch 00184: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.4892e-04 - acc: 0.8703 - val_loss: 9.9813e-04 - val_acc: 0.8318
Epoch 186/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.3572e-04 - acc: 0.8762Epoch 00185: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.3574e-04 - acc: 0.8768 - val_loss: 0.0010 - val_acc: 0.8341
Epoch 187/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.2297e-04 - acc: 0.8644Epoch 00186: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.2296e-04 - acc: 0.8645 - val_loss: 0.0011 - val_acc: 0.8271
Epoch 188/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.2580e-04 - acc: 0.8815- ETA: 1s - loss: Epoch 00187: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.2652e-04 - acc: 0.8826 - val_loss: 0.0012 - val_acc: 0.8201
Epoch 189/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.2569e-04 - acc: 0.8455Epoch 00188: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.2554e-04 - acc: 0.8464 - val_loss: 0.0011 - val_acc: 0.8271
Epoch 190/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.1294e-04 - acc: 0.8650Epoch 00189: val_loss improved from 0.00100 to 0.00097, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 4.1367e-04 - acc: 0.8657 - val_loss: 9.6943e-04 - val_acc: 0.8318
Epoch 191/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.1860e-04 - acc: 0.8644Epoch 00190: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.1844e-04 - acc: 0.8651 - val_loss: 0.0010 - val_acc: 0.8294
Epoch 192/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.2424e-04 - acc: 0.8785Epoch 00191: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.2426e-04 - acc: 0.8785 - val_loss: 0.0010 - val_acc: 0.8201
Epoch 193/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.1843e-04 - acc: 0.8691- ETA: 1s - loss: Epoch 00192: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.1985e-04 - acc: 0.8692 - val_loss: 0.0011 - val_acc: 0.8388
Epoch 194/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.1069e-04 - acc: 0.8721Epoch 00193: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.1152e-04 - acc: 0.8721 - val_loss: 0.0011 - val_acc: 0.8341
Epoch 195/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.2251e-04 - acc: 0.8703Epoch 00194: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.2206e-04 - acc: 0.8697 - val_loss: 0.0011 - val_acc: 0.8248
Epoch 196/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.0747e-04 - acc: 0.8673Epoch 00195: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.0677e-04 - acc: 0.8686 - val_loss: 0.0011 - val_acc: 0.8107
Epoch 197/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.0445e-04 - acc: 0.8768Epoch 00196: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.0426e-04 - acc: 0.8762 - val_loss: 9.8824e-04 - val_acc: 0.8201
Epoch 198/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.0928e-04 - acc: 0.8673Epoch 00197: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.0925e-04 - acc: 0.8674 - val_loss: 9.9951e-04 - val_acc: 0.8248
Epoch 199/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.1762e-04 - acc: 0.8844Epoch 00198: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.1833e-04 - acc: 0.8838 - val_loss: 0.0011 - val_acc: 0.8084
Epoch 200/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.0942e-04 - acc: 0.8656Epoch 00199: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.0912e-04 - acc: 0.8662 - val_loss: 0.0010 - val_acc: 0.8224
Epoch 201/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.0260e-04 - acc: 0.8667Epoch 00200: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.0245e-04 - acc: 0.8668 - val_loss: 0.0011 - val_acc: 0.7991
Epoch 202/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.0481e-04 - acc: 0.8667Epoch 00201: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.0410e-04 - acc: 0.8662 - val_loss: 0.0011 - val_acc: 0.8131
Epoch 203/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.8688e-04 - acc: 0.8721Epoch 00202: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.8785e-04 - acc: 0.8727 - val_loss: 9.9782e-04 - val_acc: 0.8271
Epoch 204/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.9896e-04 - acc: 0.8809Epoch 00203: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.9781e-04 - acc: 0.8814 - val_loss: 0.0011 - val_acc: 0.8201
Epoch 205/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.1279e-04 - acc: 0.8679Epoch 00204: val_loss improved from 0.00097 to 0.00096, saving model to my_model.h5
1712/1712 [==============================] - 2s - loss: 4.1279e-04 - acc: 0.8686 - val_loss: 9.5625e-04 - val_acc: 0.8107
Epoch 206/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.9202e-04 - acc: 0.8691Epoch 00205: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.9155e-04 - acc: 0.8692 - val_loss: 0.0011 - val_acc: 0.8271
Epoch 207/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.0341e-04 - acc: 0.8768Epoch 00206: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.0320e-04 - acc: 0.8756 - val_loss: 0.0010 - val_acc: 0.8131
Epoch 208/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.9998e-04 - acc: 0.8821Epoch 00207: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.0097e-04 - acc: 0.8832 - val_loss: 0.0010 - val_acc: 0.8224
Epoch 209/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.9468e-04 - acc: 0.8874Epoch 00208: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.9455e-04 - acc: 0.8861 - val_loss: 0.0010 - val_acc: 0.8294
Epoch 210/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.9384e-04 - acc: 0.8744Epoch 00209: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.9397e-04 - acc: 0.8738 - val_loss: 0.0011 - val_acc: 0.8201
Epoch 211/250
1696/1712 [============================>.] - ETA: 0s - loss: 4.0641e-04 - acc: 0.8850Epoch 00210: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 4.0653e-04 - acc: 0.8849 - val_loss: 0.0012 - val_acc: 0.8201
Epoch 212/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.9320e-04 - acc: 0.8585Epoch 00211: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.9295e-04 - acc: 0.8586 - val_loss: 9.9774e-04 - val_acc: 0.8364
Epoch 213/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.8949e-04 - acc: 0.8709Epoch 00212: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.8891e-04 - acc: 0.8703 - val_loss: 0.0010 - val_acc: 0.8364
Epoch 214/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.7759e-04 - acc: 0.8827Epoch 00213: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.7766e-04 - acc: 0.8814 - val_loss: 0.0010 - val_acc: 0.8364
Epoch 215/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.8407e-04 - acc: 0.8744Epoch 00214: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.8448e-04 - acc: 0.8727 - val_loss: 0.0010 - val_acc: 0.8341
Epoch 216/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.8756e-04 - acc: 0.8632Epoch 00215: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.8771e-04 - acc: 0.8633 - val_loss: 0.0011 - val_acc: 0.8388
Epoch 217/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.7146e-04 - acc: 0.8821Epoch 00216: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.7168e-04 - acc: 0.8814 - val_loss: 0.0011 - val_acc: 0.8224
Epoch 218/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.8511e-04 - acc: 0.8732- ETA: 1s - loss: Epoch 00217: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.8453e-04 - acc: 0.8732 - val_loss: 0.0010 - val_acc: 0.8154
Epoch 219/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.8124e-04 - acc: 0.8774Epoch 00218: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.8073e-04 - acc: 0.8773 - val_loss: 9.9407e-04 - val_acc: 0.8154
Epoch 220/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.6818e-04 - acc: 0.8833Epoch 00219: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.6822e-04 - acc: 0.8826 - val_loss: 0.0010 - val_acc: 0.8201
Epoch 221/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.7465e-04 - acc: 0.8721Epoch 00220: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.7496e-04 - acc: 0.8715 - val_loss: 0.0011 - val_acc: 0.8248
Epoch 222/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.7384e-04 - acc: 0.8809Epoch 00221: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.7470e-04 - acc: 0.8808 - val_loss: 0.0010 - val_acc: 0.8248
Epoch 223/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.8209e-04 - acc: 0.8779Epoch 00222: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.8230e-04 - acc: 0.8779 - val_loss: 9.7601e-04 - val_acc: 0.8294
Epoch 224/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.7221e-04 - acc: 0.8821Epoch 00223: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.7201e-04 - acc: 0.8826 - val_loss: 9.7508e-04 - val_acc: 0.8271
Epoch 225/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.6136e-04 - acc: 0.8803Epoch 00224: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.6168e-04 - acc: 0.8797 - val_loss: 0.0010 - val_acc: 0.8248
Epoch 226/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.7071e-04 - acc: 0.8679Epoch 00225: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.7074e-04 - acc: 0.8680 - val_loss: 0.0010 - val_acc: 0.8061
Epoch 227/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.6423e-04 - acc: 0.8738Epoch 00226: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.6359e-04 - acc: 0.8750 - val_loss: 0.0010 - val_acc: 0.8318
Epoch 228/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.6999e-04 - acc: 0.8791Epoch 00227: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.6990e-04 - acc: 0.8797 - val_loss: 0.0010 - val_acc: 0.8201
Epoch 229/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.6387e-04 - acc: 0.8809Epoch 00228: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.6500e-04 - acc: 0.8803 - val_loss: 0.0011 - val_acc: 0.8248
Epoch 230/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.6134e-04 - acc: 0.8662- ETA: 1s - loss: Epoch 00229: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.6165e-04 - acc: 0.8662 - val_loss: 9.8022e-04 - val_acc: 0.8318
Epoch 231/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.6556e-04 - acc: 0.8868Epoch 00230: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.6566e-04 - acc: 0.8867 - val_loss: 0.0012 - val_acc: 0.8318
Epoch 232/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.5937e-04 - acc: 0.8785Epoch 00231: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.5915e-04 - acc: 0.8779 - val_loss: 0.0011 - val_acc: 0.8411
Epoch 233/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.5347e-04 - acc: 0.8797Epoch 00232: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.5395e-04 - acc: 0.8791 - val_loss: 0.0011 - val_acc: 0.8364
Epoch 234/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.6852e-04 - acc: 0.8785Epoch 00233: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.6903e-04 - acc: 0.8779 - val_loss: 0.0010 - val_acc: 0.8201
Epoch 235/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.5390e-04 - acc: 0.8809Epoch 00234: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.5428e-04 - acc: 0.8803 - val_loss: 0.0010 - val_acc: 0.8248
Epoch 236/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.5509e-04 - acc: 0.8880Epoch 00235: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.5613e-04 - acc: 0.8879 - val_loss: 9.8635e-04 - val_acc: 0.8318
Epoch 237/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.5220e-04 - acc: 0.8856Epoch 00236: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.5223e-04 - acc: 0.8861 - val_loss: 0.0010 - val_acc: 0.8341
Epoch 238/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.4804e-04 - acc: 0.8833Epoch 00237: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.4743e-04 - acc: 0.8832 - val_loss: 0.0011 - val_acc: 0.8178
Epoch 239/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.5358e-04 - acc: 0.8903Epoch 00238: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.5313e-04 - acc: 0.8896 - val_loss: 0.0010 - val_acc: 0.8318
Epoch 240/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.4816e-04 - acc: 0.8744Epoch 00239: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.4846e-04 - acc: 0.8756 - val_loss: 0.0010 - val_acc: 0.8411
Epoch 241/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.4178e-04 - acc: 0.8785Epoch 00240: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.4173e-04 - acc: 0.8785 - val_loss: 0.0010 - val_acc: 0.8201
Epoch 242/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.5412e-04 - acc: 0.8886Epoch 00241: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.5445e-04 - acc: 0.8884 - val_loss: 0.0011 - val_acc: 0.8294
Epoch 243/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.3926e-04 - acc: 0.8868Epoch 00242: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.3931e-04 - acc: 0.8861 - val_loss: 0.0010 - val_acc: 0.8341
Epoch 244/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.4858e-04 - acc: 0.8833Epoch 00243: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.4910e-04 - acc: 0.8838 - val_loss: 0.0010 - val_acc: 0.8271
Epoch 245/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.5367e-04 - acc: 0.8821- ETA: 1s - loss: Epoch 00244: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.5363e-04 - acc: 0.8826 - val_loss: 0.0011 - val_acc: 0.8154
Epoch 246/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.4464e-04 - acc: 0.8715Epoch 00245: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.4454e-04 - acc: 0.8715 - val_loss: 0.0010 - val_acc: 0.8248
Epoch 247/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.4259e-04 - acc: 0.8750Epoch 00246: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.4315e-04 - acc: 0.8756 - val_loss: 0.0011 - val_acc: 0.7991
Epoch 248/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.4953e-04 - acc: 0.8721Epoch 00247: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.4889e-04 - acc: 0.8721 - val_loss: 0.0010 - val_acc: 0.8435
Epoch 249/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.3559e-04 - acc: 0.8886Epoch 00248: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.3565e-04 - acc: 0.8890 - val_loss: 0.0010 - val_acc: 0.8131
Epoch 250/250
1696/1712 [============================>.] - ETA: 0s - loss: 3.4690e-04 - acc: 0.8738Epoch 00249: val_loss did not improve
1712/1712 [==============================] - 2s - loss: 3.4628e-04 - acc: 0.8744 - val_loss: 0.0010 - val_acc: 0.8341

Step 7: Visualize the Loss and Test Predictions

(IMPLEMENTATION) Answer a few questions and visualize the loss

Question 1: Outline the steps you took to get to your final neural network architecture and your reasoning at each step.

Answer: I Started with 32 filters and after it, I'll give the model more and more filters, from my experience, most of the times the best way is to start with 16 filters or 32 filters.

I'm using the most popular and recommended activation function 'relu' and padding='same' so we won't lose important data, though in this case, it seems that the data on the edges is less important than the center, so it may be less important.

I give it kernel_size of 3, since from my experience its best to be set to 2 or 3, so I start with 3 on the first layer and will change it to 2 on the others where I'll have smaller width and height input.

After each convolution layer I added a pooling layer, it will progressively reduce the spatial size of the representation, and reduce the number of parameters and amount of computation so it gives us the ability to increase the filters count without having too many parameters, and it also helps to control overfitting. I'm using the MaxPooling2D with pool_size=2 which is most common and recommended.

After each convolution layer, I'm also Using Dropout in order to reduce even more the number of parameters and amount of computation and helps to control overfitting.

After all those layers I Added GlobalAveragePooling2D layer, doing it like in the ResNet-50 model (http://ethereon.github.io/netscope/#/gist/db945b393d40bfa26006) it enforces correspondence between feature maps and categories, and also global average pooling is a structural regularizer, which natively prevents overfitting for the overall structure (I get my knowledge from this paper: http://arxiv.org/pdf/1312.4400.pdf).

Finally I added a dense layer with output of 30 that will represent the 15 key points (15 for x + 15 for y), and as activation I used the linear activation, I could use tanh activation cause it gives values between -1 to 1 but it may act less linear around the 0 and since we have linear problem, we better to used linear activation which means adding no activation at keras.

Question 2: Defend your choice of optimizer. Which optimizers did you test, and how did you determine which worked best?

Answer: Since I know SGD most of the times is trained slower than others optimizer, so I didn't try it.

I did try all the other known optimizers(RMSprop, Adagrad, Adadelta, Adam, Adamax, Nadam), and I find that Adamax worked the best, it got val_loss of 0.00096 while all the others didn't get less than 0.0011. (on 250 epochs)

Use the code cell below to plot the training and validation loss of your neural network. You may find this resource useful.

In [39]:
## TODO: Visualize the training and validation loss of your neural network

# list all data in history
print(hist.history.keys())
# summarize history for accuracy
plt.plot(hist.history['acc'])
plt.plot(hist.history['val_acc'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])

Question 3: Do you notice any evidence of overfitting or underfitting in the above plot? If so, what steps have you taken to improve your model? Note that slight overfitting or underfitting will not hurt your chances of a successful submission, as long as you have attempted some solutions towards improving your model (such as regularization, dropout, increased/decreased number of layers, etc).

Answer: On my first architecture I had only 4 convolution layers and dropout of only 0.1 on each layer, It got me pretty good results of val_loss = 0.0015, but I did find evidence of overfitting when the train loss was getting lower but the test lost was't changing, So I increased the dropout to be 0.3 and 0.2, and in order not to lose accuracy I also added one more convolution layer with output of 512. Those changes were really helpful, now I have less overfitting and it starts later, and my best val_loss improved to just 0.00096.

Visualize a Subset of the Test Predictions

Execute the code cell below to visualize your model's predicted keypoints on a subset of the testing images.

In [40]:
y_test = model.predict(X_test)
fig = plt.figure(figsize=(20,20))
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
for i in range(9):
    ax = fig.add_subplot(3, 3, i + 1, xticks=[], yticks=[])
    plot_data(X_test[i], y_test[i], ax)

Step 8: Complete the pipeline

With the work you did in Sections 1 and 2 of this notebook, along with your freshly trained facial keypoint detector, you can now complete the full pipeline. That is given a color image containing a person or persons you can now

  • Detect the faces in this image automatically using OpenCV
  • Predict the facial keypoints in each face detected in the image
  • Paint predicted keypoints on each face detected

In this Subsection you will do just this!

(IMPLEMENTATION) Facial Keypoints Detector

Use the OpenCV face detection functionality you built in previous Sections to expand the functionality of your keypoints detector to color images with arbitrary size. Your function should perform the following steps

  1. Accept a color image.
  2. Convert the image to grayscale.
  3. Detect and crop the face contained in the image.
  4. Locate the facial keypoints in the cropped image.
  5. Overlay the facial keypoints in the original (color, uncropped) image.

Note: Step 4 can be the trickiest because remember your convolutional network is only trained to detect facial keypoints in $96 \times 96$ grayscale images where each pixel was normalized to lie in the interval $[0,1]$, and remember that each facial keypoint was normalized during training to the interval $[-1,1]$.

This means - practically speaking - to paint detected keypoints onto a test face you need to perform this same pre-processing to your candidate face - that is after detecting it you should resize it to $96 \times 96$ and normalize its values before feeding it into your facial keypoint detector.

To be shown correctly on the original image the output keypoints from your detector then need to be shifted and re-normalized from the interval $[-1,1]$ to the width and height of your detected face.

When complete you should be able to produce example images like the one below

In [11]:
# Load in color image for face detection
image = cv2.imread('images/obamas4.jpg')


# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image_copy = np.copy(image)


# plot our image
fig = plt.figure(figsize = (9,9))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('image copy')
ax1.imshow(image_copy)
Out[11]:
<matplotlib.image.AxesImage at 0x1ef83b6ae10>
In [3]:
### TODO: Use the face detection code we saw in Section 1 with your trained conv-net 
## TODO : Paint the predicted keypoints on the test image
def find_keypoints(image):
    # Convert the RGB  image to grayscale
    gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

    # Extract the pre-trained face detector from an xml file
    face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

    # Detect the faces in image
    faces = face_cascade.detectMultiScale(gray, 1.25, 6)

    # Print the number of faces detected in the image
    #print('Number of faces detected:', len(faces))

    # Make a copy of the orginal image to draw face detections on
    image_with_predicted_keypoints = np.copy(image)
    
    faces_keypoints = []
    
    for (x,y,w,h) in faces:
        # Crop and resize the face
        face_image = gray[y:y+h , x:x+w]
        resize_face = cv2.resize(face_image, (96, 96))

        # Normalized and convert to the input format
        normalized_face = resize_face / 255
        normalized_face = normalized_face[np.newaxis, :, :, np.newaxis]

        # Predict the face keypoints and un-normalized the coordinates
        keypoints = model.predict(normalized_face)
        keypoints = keypoints * 48 + 48

        # Rescale to the orifinal image scale and draw the keypoints
        coordinats_x = keypoints[0][0::2] # [startAt:endBefore:skip]
        coordinats_y = keypoints[0][1::2]
        
        coordinats_x = x + coordinats_x * w / 96
        coordinats_y = y + coordinats_y * h / 96
        cv2.rectangle(image_with_predicted_keypoints, (x,y), (x+w,y+h), (255,0,0), 3)

        faces_keypoints.append((coordinats_x, coordinats_y))
    
        for xc, yc in zip(coordinats_x, coordinats_y):
            cv2.circle(image_with_predicted_keypoints, (xc, yc), 1, (0, 255, 0), 3)
             
    return image_with_predicted_keypoints, faces_keypoints
In [55]:
from keras.models import load_model
model = load_model('my_model.h5')

image_with_predicted_keypoints, faces_keypoints = find_keypoints(image)

# Display the image with the keypoints
fig = plt.figure(figsize = (9,9))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Image with predicted keypoints')
ax1.imshow(image_with_predicted_keypoints)
Number of faces detected: 2
Out[55]:
<matplotlib.image.AxesImage at 0x1efd59cfdd8>

(Optional) Further Directions - add a filter using facial keypoints to your laptop camera

Now you can add facial keypoint detection to your laptop camera - as illustrated in the gif below.

The next Python cell contains the basic laptop video camera function used in the previous optional video exercises. Combine it with the functionality you developed for keypoint detection and marking in the previous exercise and you should be good to go!

In [56]:
import cv2
import time 
from keras.models import load_model

def laptop_camera_go():
    # Create instance of video capturer
    cv2.namedWindow("face detection activated")
    vc = cv2.VideoCapture(1)  #I have 2 cameras

    # Try to get the first frame
    if vc.isOpened(): 
        rval, frame = vc.read()
    else:
        rval = False
        
    # Extract the pre-trained face detector from an xml file
    face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

    # keep video stream open
    while rval:
        # Convert the RGB  image to grayscale
        gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
        
        # Detect the faces in image
        faces = face_cascade.detectMultiScale(gray, 1.25, 6)

        # Make a copy of the orginal image to draw face detections on
        image_with_predicted_keypoints = np.copy(image)

        for (x,y,w,h) in faces:
            # Crop and resize the face
            face_image = gray[y:y+h , x:x+w]
            resize_face = cv2.resize(face_image, (96, 96))

            # Normalized and convert to the input format
            normalized_face = resize_face / 255
            normalized_face = normalized_face[np.newaxis, :, :, np.newaxis]

            # Predict the face keypoints and un-normalized the coordinates
            keypoints = model.predict(normalized_face)
            keypoints = keypoints * 48 + 48

            # Rescale to the orifinal image scale and draw the keypoints
            coordinats_x = keypoints[0][0::2] # [startAt:endBefore:skip]
            coordinats_y = keypoints[0][1::2]

            coordinats_x = x + coordinats_x * w / 96
            coordinats_y = y + coordinats_y * h / 96
            cv2.rectangle(frame, (x,y), (x+w,y+h), (255,0,0), 3)

            for xc, yc in zip(coordinats_x, coordinats_y):
                cv2.circle(frame, (xc, yc), 1, (0, 255, 0), 3)

        # plot image from camera with detections marked
        cv2.imshow("face detection activated", frame)
        
        # exit functionality - press any key to exit laptop video
        key = cv2.waitKey(20)
        if key > 0: # exit by pressing any key
            # destroy windows
            cv2.destroyAllWindows()
            
            # hack from stack overflow for making sure window closes on osx --> https://stackoverflow.com/questions/6116564/destroywindow-does-not-close-window-on-mac-using-python-and-opencv
            for i in range (1,5):
                cv2.waitKey(1)
            return
        
        # read next frame
        time.sleep(0.05)             # control framerate for computation - default 20 frames per sec
        rval, frame = vc.read()  
In [57]:
from keras.models import load_model
model = load_model('my_model.h5')

# Run your keypoint face painter
laptop_camera_go()

(Optional) Further Directions - add a filter using facial keypoints

Using your freshly minted facial keypoint detector pipeline you can now do things like add fun filters to a person's face automatically. In this optional exercise you can play around with adding sunglasses automatically to each individual's face in an image as shown in a demonstration image below.

To produce this effect an image of a pair of sunglasses shown in the Python cell below.

In [10]:
# Load in sunglasses image - note the usage of the special option
# cv2.IMREAD_UNCHANGED, this option is used because the sunglasses 
# image has a 4th channel that allows us to control how transparent each pixel in the image is
sunglasses = cv2.imread("images/sunglasses_4.png", cv2.IMREAD_UNCHANGED)

# Plot the image
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.imshow(sunglasses)
ax1.axis('off');

This image is placed over each individual's face using the detected eye points to determine the location of the sunglasses, and eyebrow points to determine the size that the sunglasses should be for each person (one could also use the nose point to determine this).

Notice that this image actually has 4 channels, not just 3.

In [26]:
# Print out the shape of the sunglasses image
print ('The sunglasses image has shape: ' + str(np.shape(sunglasses)))
The sunglasses image has shape: (1123, 3064, 4)

It has the usual red, blue, and green channels any color image has, with the 4th channel representing the transparency level of each pixel in the image. Here's how the transparency channel works: the lower the value, the more transparent the pixel will become. The lower bound (completely transparent) is zero here, so any pixels set to 0 will not be seen.

This is how we can place this image of sunglasses on someone's face and still see the area around of their face where the sunglasses lie - because these pixels in the sunglasses image have been made completely transparent.

Lets check out the alpha channel of our sunglasses image in the next Python cell. Note because many of the pixels near the boundary are transparent we'll need to explicitly print out non-zero values if we want to see them.

In [27]:
# Print out the sunglasses transparency (alpha) channel
alpha_channel = sunglasses[:,:,3]
print ('the alpha channel here looks like')
print (alpha_channel)

# Just to double check that there are indeed non-zero values
# Let's find and print out every value greater than zero
values = np.where(alpha_channel != 0)
print ('\n the non-zero values of the alpha channel look like')
print (values)
the alpha channel here looks like
[[0 0 0 ..., 0 0 0]
 [0 0 0 ..., 0 0 0]
 [0 0 0 ..., 0 0 0]
 ..., 
 [0 0 0 ..., 0 0 0]
 [0 0 0 ..., 0 0 0]
 [0 0 0 ..., 0 0 0]]

 the non-zero values of the alpha channel look like
(array([  17,   17,   17, ..., 1109, 1109, 1109], dtype=int64), array([ 687,  688,  689, ..., 2376, 2377, 2378], dtype=int64))

This means that when we place this sunglasses image on top of another image, we can use the transparency channel as a filter to tell us which pixels to overlay on a new image (only the non-transparent ones with values greater than zero).

One last thing: it's helpful to understand which keypoint belongs to the eyes, mouth, etc. So, in the image below, we also display the index of each facial keypoint directly on the image so that you can tell which keypoints are for the eyes, eyebrows, etc.

With this information, you're well on your way to completing this filtering task! See if you can place the sunglasses automatically on the individuals in the image loaded in / shown in the next Python cell.

In [5]:
# Load in color image for face detection
image = cv2.imread('images/obamas4.jpg')

# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        
# Plot the image
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Original Image')
ax1.imshow(image)
Out[5]:
<matplotlib.image.AxesImage at 0x1fef4cd2828>
In [17]:
## (Optional) TODO: Use the face detection code we saw in Section 1 with your trained conv-net to put
## sunglasses on the individuals in our test image

from keras.models import load_model
model = load_model('my_model.h5')
    
image_with_predicted_keypoints, faces_keypoints = find_keypoints(image)

image_with_glasses = np.copy(image)

for face_keypoint in faces_keypoints:
    coordinats_x, coordinats_y = face_keypoint
    height = ((coordinats_x[10] - coordinats_x[9])**2 + (coordinats_y[10] - coordinats_y[9])**2)**0.5 *0.6 # Distance bettwen 10 and 9
    add_to_side = height / 3
    add_to_top = height / 6
    
    ''' We want the sunglasses to be on the rigth angle,
            P1=(x1,y1)
                  |\
                A | \ 
                  |  \
                  |_B_\
            P2(x2,y3)  P3=(?,?)
            
    The math explained here: 
    https://math.stackexchange.com/questions/64823/how-to-find-the-third-coordinate-of-a-right-triangle-given-2-coordinates-and-len'''
    
    p1 = [coordinats_x[7] + add_to_side, coordinats_y[7] - add_to_top]
    p2 = [coordinats_x[9] - add_to_side, coordinats_y[9] - add_to_top]
    
    Ma = (p2[1] - p1[1]) / (p2[0] - p1[0]) # slop of A
    Mb = -1 / Ma # slop of B
    B = height
    
    if Ma > 0:
        p3 = [p1[0] - B*(1 / ((1 + Mb**2)**0.5)), p1[1] - B*(Mb / ((1 + Mb**2)**0.5))] # **0.5 = sqrt
    else:
        p3 = [p1[0] + B*(1 / ((1 + Mb**2)**0.5)), p1[1] + B*(Mb / ((1 + Mb**2)**0.5))] # **0.5 = sqrt
    
    # Find the new Ma and Mb in order to find P4
    Ma = (p1[1] - p2[1]) / (p1[0] - p2[0]) # slop of A
    Mb = -1 / Ma # slop of B
    if Ma > 0:
        p4 = [p2[0] - B*(1 / ((1 + Mb**2)**0.5)), p2[1] - B*(Mb / ((1 + Mb**2)**0.5))] 
    else:
        p4 = [p2[0] + B*(1 / ((1 + Mb**2)**0.5)), p2[1] + B*(Mb / ((1 + Mb**2)**0.5))] 
        
    destination_points = np.float32([p2, p4, p3,  p1])
    
    source_points = np.float32([[0, 0],
                                     [0, sunglasses.shape[0]],
                                     [sunglasses.shape[1], sunglasses.shape[0]],
                                     [sunglasses.shape[1], 0]])

    p_transform = cv2.getPerspectiveTransform(source_points, destination_points)
    size = (image.shape[1], image.shape[0])
    warped_glasses_image = cv2.warpPerspective(sunglasses, p_transform, size, flags=cv2.INTER_LINEAR)

    alpha_channel = warped_glasses_image[:,:,:3]
    image_with_glasses[alpha_channel != 0] = alpha_channel[alpha_channel != 0]

# Plot the image
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Image with glasses')
ax1.imshow(image_with_glasses)
Out[17]:
<matplotlib.image.AxesImage at 0x1fe80a8bdd8>

(Optional) Further Directions - add a filter using facial keypoints to your laptop camera

Now you can add the sunglasses filter to your laptop camera - as illustrated in the gif below.

The next Python cell contains the basic laptop video camera function used in the previous optional video exercises. Combine it with the functionality you developed for adding sunglasses to someone's face in the previous optional exercise and you should be good to go!

In [14]:
import cv2
import time 
from keras.models import load_model
import numpy as np

def laptop_camera_go():
    # Create instance of video capturer
    cv2.namedWindow("face detection activated")
    vc = cv2.VideoCapture(1)  #I have 2 cameras

    # try to get the first frame
    if vc.isOpened(): 
        rval, frame = vc.read()
    else:
        rval = False
    
    # Keep video stream open
    while rval:
        
        image_with_predicted_keypoints, faces_keypoints = find_keypoints(frame)

        for face_keypoint in faces_keypoints:
            coordinats_x, coordinats_y = face_keypoint
            
            height = ((coordinats_x[10] - coordinats_x[9])**2 + (coordinats_y[10] - coordinats_y[9])**2)**0.5 *0.6 # Distance bettwen 10 and 9
            add_to_side = height / 3
            add_to_top = height / 6
            ''' We want the sunglasses to be on the rigth angle,
                        P1=(x1,y1)
                              |\
                            A | \ 
                              |  \
                              |_B_\
                        P2(x2,y3)  P3=(?,?)

                The math explained here: 
                https://math.stackexchange.com/questions/64823/how-to-find-the-third-coordinate-of-a-right-triangle-given-2-coordinates-and-len'''
  
            p1 = [coordinats_x[7] + add_to_side, coordinats_y[7] - add_to_top]
            p2 = [coordinats_x[9] - add_to_side, coordinats_y[9] - add_to_top]

            Ma = (p2[1] - p1[1]) / (p2[0] - p1[0]) # slop of A
            Mb = -1 / Ma # slop of B
            B = height

            if Ma > 0:
                p3 = [p1[0] - B*(1 / ((1 + Mb**2)**0.5)), p1[1] - B*(Mb / ((1 + Mb**2)**0.5))] # **0.5 = sqrt
            else:
                p3 = [p1[0] + B*(1 / ((1 + Mb**2)**0.5)), p1[1] + B*(Mb / ((1 + Mb**2)**0.5))] # **0.5 = sqrt

            # Find the new Ma and Mb in order to find P4
            Ma = (p1[1] - p2[1]) / (p1[0] - p2[0]) # slop of A
            Mb = -1 / Ma # slop of B
            if Ma > 0:
                p4 = [p2[0] - B*(1 / ((1 + Mb**2)**0.5)), p2[1] - B*(Mb / ((1 + Mb**2)**0.5))] 
            else:
                p4 = [p2[0] + B*(1 / ((1 + Mb**2)**0.5)), p2[1] + B*(Mb / ((1 + Mb**2)**0.5))] 

            destination_points = np.float32([p2, p4, p3,  p1])
            
            source_points = np.float32([[0, 0],
                                     [0, sunglasses.shape[0]],
                                     [sunglasses.shape[1], sunglasses.shape[0]],
                                     [sunglasses.shape[1], 0]])

            p_transform = cv2.getPerspectiveTransform(source_points, destination_points)
            size = (frame.shape[1], frame.shape[0])
            warped_glasses_image = cv2.warpPerspective(sunglasses, p_transform, size, flags=cv2.INTER_LINEAR)

            alpha_channel = warped_glasses_image[:,:,:3]
            frame[alpha_channel != 0] = alpha_channel[alpha_channel != 0]
            
            ''' #for testing
            cv2.circle(frame, (int(p1[0]), int(p1[1])), 1, (0, 255, 255), 3)
            cv2.circle(frame, (int(p2[0]), int(p2[1])), 1, (0, 0, 255), 3)
            cv2.circle(frame, (int(p3[0]), int(p3[1])), 1, (0, 255, 0), 3)
            cv2.circle(frame, (int(p4[0]), int(p4[1])), 1, (255, 0, 0), 3)'''
    
        # Plot image from camera with detections marked
        cv2.imshow("face detection activated", frame)
        
        # Exit functionality - press any key to exit laptop video
        key = cv2.waitKey(20)
        if key > 0: # exit by pressing any key
            # Destroy windows 
            cv2.destroyAllWindows()
            
            for i in range (1,5):
                cv2.waitKey(1)
            return
        
        # Read next frame
        time.sleep(0.05)             # control framerate for computation - default 20 frames per sec
        rval, frame = vc.read()    
        
In [15]:
# Load facial landmark detector model
model = load_model('my_model.h5')

# Run sunglasses painter
laptop_camera_go()